Initial commit
Author: bdbrd
Date: 2025-10-22 15:51:24 +02:00
Commit: cbc51f523e
125 changed files with 34817 additions and 0 deletions

LICENSE (new file, 674 lines)
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

Makefile (new file, 102 lines)
BUILD_DIR=build
CC = gcc
AS = nasm
LD = ld
CFLAGS += -Wall \
-Wextra \
-std=gnu11 \
-ffreestanding \
-fno-stack-protector \
-fno-stack-check \
-fno-lto \
-fPIE \
-m64 \
-march=x86-64 \
-mno-80387 \
-mno-mmx \
-mno-sse \
-mno-sse2 \
-mno-red-zone \
-I src/include \
-O0 \
-ggdb3 \
-g
CDEBUG = -g
LDFLAGS += -m elf_x86_64 \
-nostdlib \
-static \
-pie \
--no-dynamic-linker \
-z text \
-z max-page-size=0x1000 \
-T linker.ld
NASMFLAGS = -f elf64
dependencies:
# build limine
rm -rf limine
git clone https://github.com/limine-bootloader/limine.git --branch=v8.x-binary --depth=1
make -C limine
# clone flanterm
rm -rf src/flanterm
git clone https://codeberg.org/mintsuki/flanterm src/flanterm
all:
# make build directory
mkdir -p $(BUILD_DIR) || true
# build & link boot and kernel files
$(CC) -c src/main.c -o $(BUILD_DIR)/main.o $(CFLAGS)
$(CC) -c src/flanterm/src/flanterm.c -o $(BUILD_DIR)/flanterm.o $(CFLAGS)
$(CC) -c src/flanterm/src/flanterm_backends/fb.c -o $(BUILD_DIR)/fb.o $(CFLAGS)
$(CC) -c src/lib/string.c -o $(BUILD_DIR)/string.o $(CFLAGS)
$(CC) -c src/lib/stdio.c -o $(BUILD_DIR)/stdio.o $(CFLAGS)
$(CC) -c src/lib/io.c -o $(BUILD_DIR)/io.o $(CFLAGS)
$(CC) -c src/lib/spinlock.c -o $(BUILD_DIR)/spinlock.o $(CFLAGS)
$(CC) -c src/hal/gdt.c -o $(BUILD_DIR)/gdt.o $(CFLAGS)
$(AS) src/hal/gdt.asm -o $(BUILD_DIR)/gdt_asm.o $(NASMFLAGS)
$(CC) -c src/hal/idt.c -o $(BUILD_DIR)/idt.o $(CFLAGS)
$(AS) src/hal/idt.asm -o $(BUILD_DIR)/idt_asm.o $(NASMFLAGS)
$(CC) -c src/hal/apic.c -o $(BUILD_DIR)/apic.o $(CFLAGS)
$(CC) -c src/hal/ioapic.c -o $(BUILD_DIR)/ioapic.o $(CFLAGS)
$(CC) -c src/hal/timer.c -o $(BUILD_DIR)/timer.o $(CFLAGS)
$(CC) -c src/hal/smp.c -o $(BUILD_DIR)/smp.o $(CFLAGS)
$(CC) -c src/hal/tsc.c -o $(BUILD_DIR)/tsc.o $(CFLAGS)
$(CC) -c src/mm/pmm.c -o $(BUILD_DIR)/pmm.o $(CFLAGS)
$(CC) -c src/mm/vmm.c -o $(BUILD_DIR)/vmm.o $(CFLAGS)
$(CC) -c src/mm/kmalloc.c -o $(BUILD_DIR)/kmalloc.o $(CFLAGS)
$(CC) -c src/sys/acpi.c -o $(BUILD_DIR)/acpi.o $(CFLAGS)
$(CC) -c src/sys/pci.c -o $(BUILD_DIR)/pci.o $(CFLAGS)
$(CC) -c src/drivers/serial.c -o $(BUILD_DIR)/serial.o $(CFLAGS)
$(CC) -c src/drivers/pmt.c -o $(BUILD_DIR)/pmt.o $(CFLAGS)
$(CC) -c src/drivers/ahci.c -o $(BUILD_DIR)/ahci.o $(CFLAGS)
$(CC) -c src/scheduler/sched.c -o $(BUILD_DIR)/sched.o $(CFLAGS)
$(AS) src/scheduler/sched.asm -o $(BUILD_DIR)/sched_asm.o $(NASMFLAGS)
# link everything to an elf
$(LD) -o $(BUILD_DIR)/SFB25.elf $(BUILD_DIR)/*.o $(LDFLAGS)
# Create a directory which will be our ISO root.
mkdir -p iso_root
# Copy the relevant files over.
cp -v $(BUILD_DIR)/SFB25.elf limine.conf limine/limine-bios.sys \
limine/limine-bios-cd.bin limine/limine-uefi-cd.bin iso_root/
# Create the EFI boot tree and copy Limine's EFI executables over.
mkdir -p iso_root/EFI/BOOT
cp -v limine/BOOTX64.EFI iso_root/EFI/BOOT/
cp -v limine/BOOTIA32.EFI iso_root/EFI/BOOT/
# Create the bootable ISO.
xorriso -as mkisofs -b limine-bios-cd.bin \
-no-emul-boot -boot-load-size 4 -boot-info-table \
--efi-boot limine-uefi-cd.bin \
-efi-boot-part --efi-boot-image --protective-msdos-label \
iso_root -o $(BUILD_DIR)/SFB25.iso
# Install Limine stage 1 and 2 for legacy BIOS boot.
./limine/limine bios-install $(BUILD_DIR)/SFB25.iso
disk:
dd if=/dev/zero of=disk.img bs=1M count=128
elftest:
$(CC) src/elf/elftest.c -o $(BUILD_DIR)/elftest -ffreestanding -Isrc/include -static -fPIE -nostdlib

README.md (new file, 21 lines)
# SFB/25
Hobby operating system for the x86_64 architecture, written in C. Licensed under GPLv3.
## How to build
First run `make dependencies` to clone and build Limine and Flanterm.
Then run `make all` - make sure to adjust the `CC`, `AS` and `LD` variables to match your cross-compiling toolchain.
In the `build` folder you should now have a `SFB25.iso` file.
To try out SFB/25 you can use QEMU:
`qemu-system-x86_64 -cdrom build/SFB25.iso -m 512M`
## External projects
- [Limine bootloader](https://github.com/limine-bootloader/limine) for the bootloader
- [Flanterm](https://github.com/mintsuki/flanterm) for the terminal
- [uACPI](https://github.com/uacpi/uacpi) for the AML interpreter and other ACPI stuff

bochsrc (new file, 5 lines)
display_library: x, options="gui_debug"
ata0-master: type=cdrom, path="build/SFB25.iso", status=inserted
boot: cdrom
memory: guest=512, host=512
cpu: count=3, ips=95000000

bx_enh_dbg.ini (new file, 26 lines)
# bx_enh_dbg_ini
SeeReg[0] = TRUE
SeeReg[1] = TRUE
SeeReg[2] = TRUE
SeeReg[3] = TRUE
SeeReg[4] = FALSE
SeeReg[5] = FALSE
SeeReg[6] = FALSE
SeeReg[7] = FALSE
SingleCPU = FALSE
ShowIOWindows = TRUE
ShowButtons = TRUE
SeeRegColors = TRUE
ignoreNxtT = TRUE
ignSSDisasm = TRUE
UprCase = 0
DumpInAsciiMode = 3
isLittleEndian = TRUE
DefaultAsmLines = 512
DumpWSIndex = 0
DockOrder = 0x123
ListWidthPix[0] = 158
ListWidthPix[1] = 218
ListWidthPix[2] = 250
MainWindow = 0, 0, 714, 500
FontName = Normal

compile_flags.txt (new file, 4 lines)
-I./src/include
-Wall
-Wno-incompatible-library-redeclaration
-Wextra

limine.conf (new file, 20 lines)
# Timeout in seconds that Limine will use before automatically booting.
timeout: 5
# The entry name that will be displayed in the boot menu.
/SFB25 (KASLR on)
# We use the Limine boot protocol.
protocol: limine
# Path to the kernel to boot. boot():/ represents the partition on which limine.conf is located.
kernel_path: boot():/SFB25.elf
# Same thing, but without KASLR.
/SFB25 (KASLR off)
# We use the Limine boot protocol.
protocol: limine
kaslr: no
# Path to the kernel to boot. boot():/ represents the partition on which limine.conf is located.
kernel_path: boot():/SFB25.elf

linker.ld (new file, 69 lines)
OUTPUT_FORMAT(elf64-x86-64)
OUTPUT_ARCH(i386:x86-64)
/* We want the symbol _start to be our entry point */
ENTRY(_start)
/* Define the program headers we want so the bootloader gives us the right */
/* MMU permissions */
PHDRS
{
text PT_LOAD FLAGS((1 << 0) | (1 << 2)) ; /* Execute + Read */
rodata PT_LOAD FLAGS((1 << 2)) ; /* Read only */
data PT_LOAD FLAGS((1 << 1) | (1 << 2)) ; /* Write + Read */
dynamic PT_DYNAMIC FLAGS((1 << 1) | (1 << 2)) ; /* Dynamic PHDR for relocations */
}
SECTIONS
{
/* We want to be placed in the topmost 2GiB of the address space, for optimisations */
/* and because that is what the Limine spec mandates. */
/* Any address in this region will do, but often 0xffffffff80000000 is chosen as */
/* that is the beginning of the region. */
. = 0xffffffff80000000;
text_start_addr = .;
.text : {
*(.text .text.*)
} :text
text_end_addr = .;
/* Move to the next memory page for .rodata */
. += CONSTANT(MAXPAGESIZE);
. = ALIGN(0x1000);
rodata_start_addr = .;
.rodata : {
*(.rodata .rodata.*)
} :rodata
rodata_end_addr = .;
/* Move to the next memory page for .data */
. += CONSTANT(MAXPAGESIZE);
. = ALIGN(0x1000);
data_start_addr = .;
.data : {
*(.data .data.*)
} :data
/* Dynamic section for relocations, both in its own PHDR and inside data PHDR */
.dynamic : {
*(.dynamic)
} :data :dynamic
/* NOTE: .bss needs to be the last thing mapped to :data, otherwise lots of */
/* unnecessary zeros will be written to the binary. */
/* If you need, for example, .init_array and .fini_array, those should be placed */
/* above this. */
.bss : {
*(.bss .bss.*)
*(COMMON)
} :data
data_end_addr = .;
/* Discard .note.* and .eh_frame since they may cause issues on some hosts. */
/DISCARD/ : {
*(.eh_frame)
*(.note .note.*)
}
}

src/drivers/ahci.c (new file, 75 lines)
#include <SFB25.h>
#include <stdio.h>
#include "../hal/apic.h"
#include "../sys/pci.h"
#include "../mm/vmm.h"
#include "../hal/idt.h"
#define AHCI_MSE (1 << 1)          /* PCI command bit 1: Memory Space Enable */
#define AHCI_BME (1 << 2)          /* PCI command bit 2: Bus Master Enable */
#define AHCI_INT_ENABLED (1 << 10) /* PCI command bit 10: Interrupt Disable (clear it to enable interrupts) */
#define AHCI_CLASS_ID 0x01
#define AHCI_SUBCLASS_ID 0x06
#define AHCI_HOST_CAP_REG 0x00
#define AHCI_GHC_REG 0x04
#define AHCI_INT_STATUS_REG 0x08
#define AHCI_PORTS_IMPL_REG 0x0C
#define AHCI_BOHC_REG 0x28
uint64_t ahci_base_address = 0;
uint32_t ahci_read_reg(uint16_t reg){
return *(volatile uint32_t*)(ahci_base_address + reg);
}
void ahci_write_reg(uint16_t reg, uint32_t data){
*(volatile uint32_t*)(ahci_base_address + reg) = data;
}
void ahci_init(){
pci_header_0_t *header = (pci_header_0_t *)pci_find_device(AHCI_CLASS_ID, AHCI_SUBCLASS_ID);
if(!header){
klog(LOG_ERROR, __func__, "AHCI controller not found!");
kkill();
}
kprintf("size of header: 0x{xn}", sizeof(pci_header_0_t));
extern uint64_t hhdmoffset;
ahci_base_address = header->bar5 & 0xfffff000;
kprintf("ahci: 0x{x}\n", ahci_base_address);
/* Enable memory space and bus mastering; clear the Interrupt Disable bit (bit 10) so the controller can raise interrupts */
header->header.command = (header->header.command | AHCI_MSE | AHCI_BME) & (uint16_t)~(1 << 10);
/* Map the AHCI registers */
kernel_map_pages((uint64_t*)ahci_base_address, 1, PTE_BIT_RW | PTE_BIT_NX | PTE_BIT_UNCACHABLE);
ahci_base_address += hhdmoffset;
/* BIOS/OS Handoff */
kprintf("ahci: Performing BIOS/OS handoff\n");
ahci_write_reg(AHCI_BOHC_REG, ahci_read_reg(AHCI_BOHC_REG) | 0x2); // Set the OS Owned Semaphore bit - OS now owns the HBA
uint32_t bohc = ahci_read_reg(AHCI_BOHC_REG);
/* Wait until the BIOS Owned Semaphore (bit 0) clears, signalling the handoff is complete */
while(bohc & 0x1){
apic_sleep(200);
bohc = ahci_read_reg(AHCI_BOHC_REG);
}
/* Reset the controller */
ahci_write_reg(AHCI_GHC_REG, 1);
/* Set the IRQ and enable interrupts */
kprintf("ahci: Requesting pin {d}\n", header->interrupt_pin);
}

src/drivers/ahci.h (new file, 1 line)
void ahci_init();

src/drivers/pmt.c (new file, 79 lines)
#include "../sys/acpi.h"
#include <stdio.h>
#include <SFB25.h>
#include <io.h>
#define PMT_TIMER_RATE 3579545
#define X_PMT_TMR_BLOCK_OFFSET 208
fadt_t *fadt;
extern uint64_t hhdmoffset;
/* Use extended address fields instead */
bool use_ext = false;
uint64_t pmt_read_reg(gas_t X_PMTimerBlock){
/* TODO: Bochs reports an invalid address space id here */
/* Check address space ID field to understand how to access the register */
if(X_PMTimerBlock.address_space_id == 0x00){
/* Access through memory */
return *((uint64_t*)X_PMTimerBlock.address);
}else if (X_PMTimerBlock.address_space_id == 0x01){
/* Access through I/O port */
return inl(X_PMTimerBlock.address);
}else{
serial_kprintf("address id: 0x{xn}", X_PMTimerBlock.address_space_id);
klog(LOG_ERROR, __func__, "X_PMTimerBlock address space id isn't supported!");
return 0;
}
}
int pmt_init(){
fadt = (fadt_t*)((uint64_t)find_acpi_table("FACP"));
if(!fadt){
klog(LOG_ERROR, __func__, "Didn't find FADT table");
kkill();
}
fadt = (fadt_t*)((uint64_t)fadt + hhdmoffset);
/* Check if timer exists */
if(fadt->PMTimerLength == 0){
return -1;
}
/* If the ACPI revision is >= 2 and X_PMTimerBlock is populated, use the extended block */
if(fadt->header.revision >= 2 && fadt->X_PMTimerBlock.address != 0 && (fadt->header.length >= X_PMT_TMR_BLOCK_OFFSET)){
serial_kprintf("pmt: Using the X_PMTimerBlock\n");
use_ext = true;
}
return 0;
}
void pmt_delay(uint64_t us){
uint64_t count;
if(!use_ext){
count = inl(fadt->PMTimerBlock);
}else{
count = pmt_read_reg(fadt->X_PMTimerBlock);
}
uint64_t target = (us * PMT_TIMER_RATE) / 1000000;
uint64_t current = 0;
while(current < target){
if(!use_ext){
current = ((inl(fadt->PMTimerBlock) - count) & 0xffffff);
}else{
current = (pmt_read_reg(fadt->X_PMTimerBlock) - count) & 0xffffff;
}
}
}

src/drivers/pmt.h (new file, 3 lines)
#include <stdint.h>
int pmt_init();
void pmt_delay(uint64_t us);

src/drivers/rtc.c (new file, empty)

src/drivers/serial.c (new file, 76 lines)
#include "../sys/acpi.h"
#include "../hal/ioapic.h"
#include <io.h>
#include <stdio.h>
#define COM1 0x3F8
#define LINE_CTRL_REG 0x3
#define LINE_STAT_REG 0x5
#define INT_ENABLE_REG 0x1
#define MDM_CTRL_REG 0x4
#define DIVISOR 12 // We want a baud rate of 9600: 115200 / 12 = 9600
bool serial_enabled = false;
void serial_init(){
/* Disable interrupts */
outb(COM1 + INT_ENABLE_REG, 0);
/* Set the DLAB bit */
outb(COM1 + LINE_CTRL_REG, (1 << 7));
/* Send most significant byte of the divisor (COM1 + 1 with DLAB set) */
outb(COM1 + 1, 0);
/* Send least significant byte of the divisor (COM1 + 0 with DLAB set) */
outb(COM1, 12);
/* Clear the DLAB bit */
outb(COM1 + LINE_CTRL_REG, (0 << 7));
/* Set the character length to 8 bits, parity to none and stop bit to 1 */
outb(COM1 + LINE_CTRL_REG, 0b00000011);
/* Set DTR, RTS and enables IRQ */
outb(COM1 + MDM_CTRL_REG, 0b00001101);
/* Set loopback testing mode to check that the UART works */
outb(COM1 + MDM_CTRL_REG, 0b00011110);
outb(COM1, 0xAE);
if(inb(COM1) != 0xAE){
klog(LOG_WARN, __func__, "Serial controller failed test, serial output will not work");
return;
}
/* Disable loopback and set DTR bit */
outb(COM1 + MDM_CTRL_REG, 0b00001111);
serial_enabled = true;
}
uint8_t serial_read(){
while((inb(COM1 + LINE_STAT_REG) & 0x1) == 0){ asm("nop"); }
return inb(COM1);
}
void serial_write(uint8_t data){
while((inb(COM1 + LINE_STAT_REG) & (1 << 5)) == 0){ asm("nop"); }
outb(COM1, data);
}
void serial_print(char *str){
uint64_t i = 0;
while (str[i] != '\0') {
serial_write(str[i]);
i++;
}
}

src/drivers/serial.h (new file, 8 lines)
#include <stdint.h>
void serial_write(uint8_t data);
uint8_t serial_read();
void serial_print(char *str);
void serial_init();

src/elf/elf.c (new file, 24 lines)
#include <error.h>
#include <stdint.h>
#include "elf.h"
kstatus check_elf(elf64_ehdr *ehdr){
if(!ehdr){
return KERNEL_STATUS_ERROR;
}
if( ehdr->e_ident[0] == ELFMAG0 && ehdr->e_ident[1] == ELFMAG1 &&
ehdr->e_ident[2] == ELFMAG2 && ehdr->e_ident[3] == ELFMAG3){
return KERNEL_STATUS_SUCCESS;
}
return KERNEL_STATUS_ERROR;
}
kstatus kernel_load_elf64(elf64_ehdr *ehdr){
if(check_elf(ehdr) != KERNEL_STATUS_SUCCESS){
return KERNEL_STATUS_ERROR;
}
/* TODO: parse the program headers and map the segments */
return KERNEL_STATUS_SUCCESS;
}

src/elf/elf.h (new file, 43 lines)
#include <stdint.h>
typedef uint64_t elf64_addr;
typedef uint64_t elf64_off;
typedef uint16_t elf64_half;
typedef uint32_t elf64_word;
typedef int32_t elf64_sword;
typedef uint64_t elf64_xword;
typedef int64_t elf64_sxword;
# define ELFMAG0 0x7F // e_ident[EI_MAG0]
# define ELFMAG1 'E' // e_ident[EI_MAG1]
# define ELFMAG2 'L' // e_ident[EI_MAG2]
# define ELFMAG3 'F' // e_ident[EI_MAG3]
typedef struct elf64_ehdr {
uint8_t e_ident[16];
elf64_half e_type;
elf64_half e_machine;
elf64_word e_version;
elf64_addr e_entry;
elf64_off e_phoff;
elf64_off e_shoff;
elf64_word e_flags;
elf64_half e_ehsize;
elf64_half e_phentsize;
elf64_half e_phnum;
elf64_half e_shentsize;
elf64_half e_shnum;
elf64_half e_shstrndx;
}__attribute((packed))elf64_ehdr;
enum e_ident{
EI_MAG0 = 0, // 0x7F
EI_MAG1 = 1, // 'E'
EI_MAG2 = 2, // 'L'
EI_MAG3 = 3, // 'F'
EI_CLASS = 4, // Architecture (32/64)
EI_DATA = 5, // Byte Order
EI_VERSION = 6, // ELF Version
EI_OSABI = 7, // OS Specific
EI_ABIVERSION = 8, // OS Specific
EI_PAD = 9 // Padding
};

src/elf/elftest.c (new file, 5 lines)
int _start(){
return 123;
}

src/hal/apic.c (new file, 137 lines)
#include "../sys/acpi.h"
#include "../drivers/pmt.h"
#include "smp.h"
#include "timer.h"
#include "ioapic.h"
#include <lock.h>
#include <stdio.h>
#include <SFB25.h>
#include <cpuid.h> // GCC specific
#define LAPIC_ID_REG 0x020
#define LAPIC_EOI_REG 0x0B0
#define LAPIC_SPURIOUS_REG 0x0F0
#define LAPIC_ERR_REG 0x280
#define LAPIC_LINT0_REG 0x350
#define LAPIC_LINT1_REG 0x360
#define LAPIC_ICR_REG 0x300
#define LAPIC_LVT_TIMER_REG 0x320
#define LAPIC_TIMER_INITIAL_CNT_REG 0x380
#define LAPIC_TIMER_CURRENT_CNT_REG 0x390
#define LAPIC_TIMER_DIVIDER_REG 0x3E0
#define LAPIC_TIMER_MASK (1 << 16)
#define LAPIC_TIMER_UNMASK 0xFFFEFFFF
#define LAPIC_TIMER_PERIODIC (1 << 17)
#define LAPIC_TIMER_VECTOR 69
extern madt_t *madt;
extern uint64_t hhdmoffset;
uint64_t lapic_address = 0;
uint64_t timer_speed_us = 0;
void lapic_write_reg(uint32_t reg, uint32_t data){
*((volatile uint32_t*)(lapic_address + reg)) = data;
}
uint32_t lapic_read_reg(uint32_t reg){
return *((volatile uint32_t*)(lapic_address + reg));
}
/* Assumes single-threaded use */
void apic_sleep(uint64_t ms){
uint64_t lapic_timer_ticks = get_cpu_struct()->lapic_timer_ticks;
uint64_t curcnt = get_cpu_struct()->lapic_timer_ticks;
while (lapic_timer_ticks - curcnt < ms) {
lapic_timer_ticks = get_cpu_struct()->lapic_timer_ticks;
}
}
atomic_flag lapic_timer_flag = ATOMIC_FLAG_INIT;
void lapic_timer_init(int us){
acquire_lock(&lapic_timer_flag);
/* Stop the APIC timer */
lapic_write_reg(LAPIC_TIMER_INITIAL_CNT_REG, 0);
/* Set the divisor to 16 */
lapic_write_reg(LAPIC_TIMER_DIVIDER_REG, 0b11);
/* Set the initial count to max */
lapic_write_reg(LAPIC_TIMER_INITIAL_CNT_REG, 0xffffffff);
/* Call a delay function based on the available timer */
pmt_delay(us);
/* Mask the timer (prevents interrupts) */
lapic_write_reg(LAPIC_LVT_TIMER_REG, LAPIC_TIMER_MASK);
/* Determine the initial count to be used for a delay set by `timer_speed_us` */
uint32_t calibration = 0xffffffff - lapic_read_reg(LAPIC_TIMER_CURRENT_CNT_REG);
/* Set the timer interrupt vector and put the timer into periodic mode */
lapic_write_reg(LAPIC_LVT_TIMER_REG, LAPIC_TIMER_VECTOR | LAPIC_TIMER_PERIODIC);
/* Set the initial count to the calibration value */
lapic_write_reg(LAPIC_TIMER_INITIAL_CNT_REG, calibration);
free_lock(&lapic_timer_flag);
}
void apic_init(void){
asm("cli");
lapic_address = madt->lic_address + hhdmoffset;
lapic_ao_t *lapic_ao = (lapic_ao_t*) find_ics(0x5); // Local APIC Address Override
/* If there is a lapic address override present then use that instead */
if(lapic_ao){
/* Check that the field isn't 0 */
if(lapic_ao->lapic_address != 0){
lapic_address = lapic_ao->lapic_address + hhdmoffset;
}
}
/* Enable the LAPIC (bit 8) and set the spurious interrupt vector to 0xFF */
lapic_write_reg(LAPIC_SPURIOUS_REG, 0x1FF);
/* Initialize the IOAPIC */
ioapic_init();
/* Start the timers for calibration of the APIC timer */
timer_init();
/* Start the APIC timer with 10ms timer */
lapic_timer_init(10000);
asm("sti");
}
void ap_apic_init(){
asm("cli");
/* Enable the LAPIC (bit 8) and set the spurious interrupt vector to 0xFF */
lapic_write_reg(LAPIC_SPURIOUS_REG, 0x1FF);
/* Start the APIC timer */
lapic_timer_init(10000);
asm("sti");
}
void apic_timer_handler(){
lapic_write_reg(LAPIC_EOI_REG, 0);
if(get_cpu_struct_initialized()){
get_cpu_struct()->lapic_timer_ticks++;
}
}
void apic_send_ipi(uint8_t dest_field, uint8_t dest_shorthand, uint8_t trigger, uint8_t level, uint8_t status, uint8_t destination, uint8_t delivery_mode, uint8_t vector){
}

src/hal/apic.h (new file, 4 lines)
#include <stdint.h>
void apic_init(void);
void ap_apic_init();
void apic_sleep(uint64_t ms);

src/hal/gdt.asm (new file, 34 lines)
[bits 64]
default rel
extern gdtr
global s_load_gdt
s_load_gdt:
lgdt [gdtr]
; move kernel data offset into data registers
mov ax, 0x10
mov ds, ax
mov es, ax
mov ss, ax
; zero the optional data registers
xor ax, ax
mov fs, ax
mov gs, ax
; pop the return instruction pointer from the stack
pop rax
; first push the segment selector we will far return to (0x08 is the code segment)
push 0x08
; then push the return instruction pointer
push rax
; and finally far return
retfq

src/hal/gdt.c (new file, 33 lines)
#include "gdt.h"
#include <stdio.h>
gdt_descriptor gdt[5] = {0};
gdt_register gdtr = {sizeof(gdt)-1, (uint64_t)(&gdt)};
extern void s_load_gdt();
void gdt_set_entry(int num, unsigned long long base, unsigned long long limit, unsigned char access, unsigned char granularity){
// descriptor base access
gdt[num].base_low = (base & 0xFFFF);
gdt[num].base_middle = (base >> 16) & 0xFF;
gdt[num].base_high = (base >> 24) & 0xFF;
// descriptor limits
gdt[num].limit_low = (limit & 0xFFFF);
gdt[num].granularity = ((limit >> 16) & 0x0F);
// granularity and access flag
gdt[num].granularity |= (granularity & 0xF) << 4;
gdt[num].access = access;
}
void set_gdt(void){
gdt_set_entry(0, 0, 0, 0, 0); // null segment offset 0x00
gdt_set_entry(1, 0, 0xFFFFF, 0x9A, 0xA); // kernel code offset 0x08
gdt_set_entry(2, 0, 0xFFFFF, 0x92, 0xA); // kernel data offset 0x10
gdt_set_entry(3, 0, 0xFFFFF, 0xFA, 0xA); // userspace code offset 0x18
gdt_set_entry(4, 0, 0xFFFFF, 0xF2, 0xA); // userspace data offset 0x20
s_load_gdt();
}

src/hal/gdt.h (new file, 17 lines)
#include <stdint.h>
typedef struct gdt_descriptor {
uint16_t limit_low;
uint16_t base_low;
uint8_t base_middle;
uint8_t access;
uint8_t granularity;
uint8_t base_high;
} __attribute((packed)) gdt_descriptor;
typedef struct gdt_register {
uint16_t limit;
uint64_t base_address;
} __attribute((packed)) gdt_register;
void set_gdt(void);

src/hal/idt.asm (new file, 330 lines)
default rel
extern interrupt_handler
extern idtr
global s_isr0
global s_isr1
global s_isr2
global s_isr3
global s_isr4
global s_isr5
global s_isr6
global s_isr7
global s_isr8
global s_isr9
global s_isr10
global s_isr11
global s_isr12
global s_isr13
global s_isr14
global s_isr15
global s_isr16
global s_isr17
global s_isr18
global s_isr19
global s_isr20
global s_isr21
global s_isr22
global s_isr23
global s_isr24
global s_isr25
global s_isr26
global s_isr27
global s_isr28
global s_isr29
global s_isr30
global s_isr31
global s_isr44
global s_isr69
global s_isr70
global s_isr255
global s_load_idt
s_isr0:
push qword 0 ; dummy
push qword 0 ; isr num
jmp isr_handler
s_isr1:
push qword 0 ; dummy
push qword 1 ; isr num
jmp isr_handler
s_isr2:
push qword 0 ; dummy
push qword 2 ; isr num
jmp isr_handler
s_isr3:
push qword 0 ; dummy
push qword 3 ; isr num
jmp isr_handler
s_isr4:
push qword 0 ; dummy
push qword 4 ; isr num
jmp isr_handler
s_isr5:
push qword 0 ; dummy
push qword 5 ; isr num
jmp isr_handler
s_isr6:
push qword 0 ; dummy
push qword 6 ; isr num
jmp isr_handler
s_isr7:
push qword 0 ; dummy
push qword 7 ; isr num
jmp isr_handler
s_isr8:
; dont push dummy as it already pushes one
push qword 8 ; isr num
jmp isr_handler
s_isr9:
push qword 0 ; dummy
push qword 9 ; isr num
jmp isr_handler
s_isr10:
; dont push dummy as it already pushes one
push qword 10 ; isr num
jmp isr_handler
s_isr11:
; dont push dummy as it already pushes one
push qword 11 ; isr num
jmp isr_handler
s_isr12:
; dont push dummy as it already pushes one
push qword 12 ; isr num
jmp isr_handler
s_isr13:
; dont push dummy as it already pushes one
push qword 13 ; isr num
jmp isr_handler
s_isr14:
; dont push dummy as it already pushes one
push qword 14 ; isr num
jmp isr_handler
s_isr15:
push qword 0 ; dummy
push qword 15 ; isr num
jmp isr_handler
s_isr16:
push qword 0 ; dummy
push qword 16 ; isr num
jmp isr_handler
s_isr17:
push qword 0 ; dummy
push qword 17 ; isr num
jmp isr_handler
s_isr18:
push qword 0 ; dummy
push qword 18 ; isr num
jmp isr_handler
; 19: Reserved
s_isr19:
push qword 0
push qword 19
jmp isr_handler
; 20: Reserved
s_isr20:
push qword 0
push qword 20
jmp isr_handler
; 21: Reserved
s_isr21:
push qword 0
push qword 21
jmp isr_handler
; 22: Reserved
s_isr22:
push qword 0
push qword 22
jmp isr_handler
; 23: Reserved
s_isr23:
push qword 0
push qword 23
jmp isr_handler
; 24: Reserved
s_isr24:
push qword 0
push qword 24
jmp isr_handler
; 25: Reserved
s_isr25:
push qword 0
push qword 25
jmp isr_handler
; 26: Reserved
s_isr26:
push qword 0
push qword 26
jmp isr_handler
; 27: Reserved
s_isr27:
push qword 0
push qword 27
jmp isr_handler
; 28: Reserved
s_isr28:
push qword 0
push qword 28
jmp isr_handler
; 29: Reserved
s_isr29:
push qword 0
push qword 29
jmp isr_handler
; 30: Reserved
s_isr30:
push qword 0
push qword 30
jmp isr_handler
; 31: Reserved
s_isr31:
push qword 0
push qword 31
jmp isr_handler
s_isr44:
push qword 0
push qword 44
jmp isr_handler
; 69 - APIC timer
s_isr69:
push qword 0
push qword 69
jmp isr_handler
; 70 - Kernel panic
s_isr70:
push qword 0
push qword 70
jmp isr_handler
s_isr255:
push qword 0
push qword 255
jmp isr_handler
%macro pushaq 0
push rax
push rbx
push rcx
push rdx
push rbp
push rsi
push rdi
push r8
push r9
push r10
push r11
push r12
push r13
push r14
push r15
%endmacro
%macro popaq 0
pop r15
pop r14
pop r13
pop r12
pop r11
pop r10
pop r9
pop r8
pop rdi
pop rsi
pop rbp
pop rdx
pop rcx
pop rbx
pop rax
%endmacro
isr_handler:
pushaq
mov rdi, rsp ; put stack frame as parameter for interrupt_handler
call interrupt_handler
popaq
add rsp, 16 ; remove vector and error code from the stack
iretq
s_load_idt:
lidt [idtr]
sti
ret

src/hal/idt.c (new file, 230 lines)
#include "idt.h"
#include "error.h"
#include "timer.h"
#include <stdio.h>
#include <lock.h>
#include <SFB25.h>
idt_descriptor idt[256] = {0};
idt_register idtr = {sizeof(idt)-1, (uint64_t)(&idt)};
/* Expand if needed */
#define MAX_IRQ 256
/* IRQ structure list, eventually restructure to support IRQs on multiple cores */
irq_t irq_list[MAX_IRQ] = {0};
extern void s_isr0();
extern void s_isr1();
extern void s_isr2();
extern void s_isr3();
extern void s_isr4();
extern void s_isr5();
extern void s_isr6();
extern void s_isr7();
extern void s_isr8();
extern void s_isr9();
extern void s_isr10();
extern void s_isr11();
extern void s_isr12();
extern void s_isr13();
extern void s_isr14();
extern void s_isr15();
extern void s_isr16();
extern void s_isr17();
extern void s_isr18();
extern void s_isr19();
extern void s_isr20();
extern void s_isr21();
extern void s_isr22();
extern void s_isr23();
extern void s_isr24();
extern void s_isr25();
extern void s_isr26();
extern void s_isr27();
extern void s_isr28();
extern void s_isr29();
extern void s_isr30();
extern void s_isr31();
extern void s_isr44();
extern void s_isr69();
extern void s_isr70();
extern void s_isr255();
extern void s_load_idt();
atomic_flag irq_register_lock = ATOMIC_FLAG_INIT;
/* Registers an IRQ with the specified vector. */
kstatus register_irq_vector(uint8_t vector, void *base, uint8_t flags){
acquire_lock(&irq_register_lock);
/* Fail if the vector is already in use */
if(irq_list[vector].in_use){
free_lock(&irq_register_lock);
return KERNEL_STATUS_ERROR;
}
set_idt_descriptor(vector, base, flags);
irq_list[vector].base = base;
irq_list[vector].in_use = true;
s_load_idt();
free_lock(&irq_register_lock);
return KERNEL_STATUS_SUCCESS;
}
/* Registers an IRQ and returns the vector */
int register_irq(void *base, uint8_t flags){
acquire_lock(&irq_register_lock);
for(size_t i = 0; i < MAX_IRQ; i++){
if(!irq_list[i].in_use) {
set_idt_descriptor(i, base, flags);
irq_list[i].base = base;
irq_list[i].in_use = true;
free_lock(&irq_register_lock);
s_load_idt();
return i;
}
}
free_lock(&irq_register_lock);
return -1;
}
void set_idt_descriptor(uint8_t vector, void *base, uint8_t flags){
idt[vector].offset_low = ((uint64_t)base & 0xffff);
idt[vector].segment_sel = 0x08; // kernel code segment
idt[vector].ist = 0;
idt[vector].attributes = flags;
idt[vector].offset_high = ((uint64_t)base >> 16) & 0xffff;
idt[vector].offset_higher = ((uint64_t)base >> 32) & 0xffffffff;
idt[vector].reserved = 0;
}
void set_idt(void){
/* Set all the reserved vectors as used */
for(size_t i = 0; i < 32; i++){
irq_list[i].in_use = true;
irq_list[i].base = NULL;
}
set_idt_descriptor(0, s_isr0, 0x8E);
set_idt_descriptor(1, s_isr1, 0x8E);
set_idt_descriptor(2, s_isr2, 0x8E);
set_idt_descriptor(3, s_isr3, 0x8E);
set_idt_descriptor(4, s_isr4, 0x8E);
set_idt_descriptor(5, s_isr5, 0x8E);
set_idt_descriptor(6, s_isr6, 0x8E);
set_idt_descriptor(7, s_isr7, 0x8E);
set_idt_descriptor(8, s_isr8, 0x8E);
set_idt_descriptor(9, s_isr9, 0x8E);
set_idt_descriptor(10, s_isr10, 0x8E);
set_idt_descriptor(11, s_isr11, 0x8E);
set_idt_descriptor(12, s_isr12, 0x8E);
set_idt_descriptor(13, s_isr13, 0x8E);
set_idt_descriptor(14, s_isr14, 0x8E);
set_idt_descriptor(15, s_isr15, 0x8E);
set_idt_descriptor(16, s_isr16, 0x8E);
set_idt_descriptor(17, s_isr17, 0x8E);
set_idt_descriptor(18, s_isr18, 0x8E);
set_idt_descriptor(19, s_isr19, 0x8E);
set_idt_descriptor(20, s_isr20, 0x8E);
set_idt_descriptor(21, s_isr21, 0x8E);
set_idt_descriptor(22, s_isr22, 0x8E);
set_idt_descriptor(23, s_isr23, 0x8E);
set_idt_descriptor(24, s_isr24, 0x8E);
set_idt_descriptor(25, s_isr25, 0x8E);
set_idt_descriptor(26, s_isr26, 0x8E);
set_idt_descriptor(27, s_isr27, 0x8E);
set_idt_descriptor(28, s_isr28, 0x8E);
set_idt_descriptor(29, s_isr29, 0x8E);
set_idt_descriptor(30, s_isr30, 0x8E);
set_idt_descriptor(31, s_isr31, 0x8E);
set_idt_descriptor(44, s_isr44, 0x8E);
set_idt_descriptor(69, s_isr69, 0x8E);
set_idt_descriptor(70, s_isr70, 0x8E);
set_idt_descriptor(255, s_isr255, 0x8E);
s_load_idt();
}
char *exception_messages[] =
{
"Division Error",
"Debug",
"Non Maskable Interrupt",
"Breakpoint",
"Into Detected Overflow",
"Out of Bounds",
"Invalid Opcode",
"Device not available",
"Double Fault",
"Coprocessor Segment Overrun",
"Invalid TSS",
"Segment Not Present",
"Stack Fault",
"General Protection Fault",
"Page Fault",
"x87 FPU Floating-point error",
"Alignment Check",
"Machine Check",
"SIMD Floating-point exception",
"Virtualization exception",
"Control Protection",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved",
"Reserved"
}; /* 32 entries, indices 0-31 */
void interrupt_handler(interrupt_frame *r){
if(r->int_no < 32){
kprintf("\nOh no! Received interrupt {d}, '{s}'. Below is the provided stack frame{n}{n}", r->int_no, exception_messages[r->int_no]);
kprintf("error code 0x{xn}", r->err);
kprintf("rax 0x{x} | rbx 0x{x} | rcx 0x{x} | rdx 0x{xn}", r->rax, r->rbx, r->rcx, r->rdx);
kprintf("rdi 0x{x} | rsi 0x{x} | rbp 0x{xn}", r->rdi, r->rsi, r->rbp);
kprintf("r8 0x{x} | r9 0x{x} | r10 0x{x} | r11 0x{x} | r12 0x{x} | r13 0x{x} | r14 0x{x} | r15 0x{xn}", r->r8, r->r9, r->r10, r->r11, r->r12, r->r13, r->r14, r->r15);
kprintf("rip 0x{x} | cs 0x{x} | ss 0x{x} | rsp 0x{x} | rflags 0x{xn}", r->rip, r->cs, r->ss, r->rsp, r->rflags);
kkill();
for(;;);
}
if(r->int_no == 255){
kprintf("hey");
}
if(r->int_no == 69){
apic_timer_handler();
}
if(r->int_no == 70){
for(;;){
asm("cli;hlt");
}
}
return;
}

src/hal/idt.h Normal file
@@ -0,0 +1,37 @@
#include "error.h"
#include <stdbool.h>
#include <stdint.h>
typedef struct idt_descriptor {
uint16_t offset_low;
uint16_t segment_sel;
uint8_t ist;
uint8_t attributes;
uint16_t offset_high;
uint32_t offset_higher;
uint32_t reserved;
} __attribute__((packed)) idt_descriptor;
typedef struct idt_register {
uint16_t limit;
uint64_t base_address;
} __attribute__((packed)) idt_register;
typedef struct interrupt_frame {
uint64_t r15, r14, r13, r12, r11, r10, r9, r8, rdi, rsi, rbp, rdx, rcx, rbx, rax;
uint64_t int_no, err;
uint64_t rip, cs, rflags, rsp, ss;
} __attribute__((packed)) interrupt_frame;
typedef struct irq_t {
void *base;
bool in_use;
}irq_t;
void set_idt_descriptor(uint8_t vector, void *base, uint8_t flags);
kstatus register_irq_vector(uint8_t vector, void *base, uint8_t flags);
int register_irq(void *base, uint8_t flags);
void set_idt(void);
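The gate layout above splits each 64-bit handler address across three fields (offset_low, offset_high, offset_higher). A minimal sketch of that split and its inverse, with illustrative helper names that are not part of this repo:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helpers: split a 64-bit handler address the way a
 * long-mode IDT gate stores it, then reassemble it. */
typedef struct {
    uint16_t offset_low;    /* bits 0-15  */
    uint16_t offset_high;   /* bits 16-31 */
    uint32_t offset_higher; /* bits 32-63 */
} gate_offset;

static gate_offset split_offset(uint64_t base) {
    gate_offset g;
    g.offset_low    = (uint16_t)(base & 0xFFFF);
    g.offset_high   = (uint16_t)((base >> 16) & 0xFFFF);
    g.offset_higher = (uint32_t)(base >> 32);
    return g;
}

static uint64_t join_offset(gate_offset g) {
    return (uint64_t)g.offset_low |
           ((uint64_t)g.offset_high << 16) |
           ((uint64_t)g.offset_higher << 32);
}
```

set_idt_descriptor has to perform exactly this split when filling an idt_descriptor, since higher-half kernel addresses always have nonzero upper bits.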

src/hal/ioapic.c Normal file
@@ -0,0 +1,62 @@
#include <stdint.h>
#include <stdio.h>
#include <SFB25.h>
#include "../sys/acpi.h"
#include "error.h"
#define IOREGSEL 0x0
#define IOWIN 0x10
#define IOAPICID 0x0
#define IOAPICVER 0x1
#define IOAPICARB 0x2
#define IOREDTBL(x) (0x10 + (x * 2)) // 0-23 registers
extern uint64_t hhdmoffset;
extern madt_t *madt;
uint64_t ioapic_address;
void ioapic_write_reg(uint8_t reg, uint32_t data){
/* First load IOREGSEL with the register we want to access.
The accesses are MMIO, so the pointers must be volatile to keep
the compiler from reordering or eliding them. */
*(volatile uint32_t*)(ioapic_address + IOREGSEL) = reg;
/* Then write the data to the IOWIN register */
*(volatile uint32_t*)(ioapic_address + IOWIN) = data;
}
uint32_t ioapic_read_reg(uint8_t reg){
*(volatile uint32_t*)(ioapic_address + IOREGSEL) = reg;
return *(volatile uint32_t*)(ioapic_address + IOWIN);
}
void write_redir_entry(uint8_t reg, uint64_t data){
/* First write lower 32-bits of the data to the specified IOREDTBL register */
ioapic_write_reg(IOREDTBL(reg), (uint32_t)(data & 0xFFFFFFFF));
/* Then write the upper 32-bits */
ioapic_write_reg(IOREDTBL(reg)+1, (uint32_t)(data >> 32));
}
kstatus set_redir_entry(uint8_t pin, uint8_t vector, uint8_t delivery, uint8_t trigger, uint8_t destination_field, uint8_t destination_mode){
uint64_t data = ((uint64_t)destination_field << 56) | ((uint64_t)trigger << 15) | ((uint64_t)destination_mode << 11) | ((uint64_t)delivery << 8) | vector;
write_redir_entry(pin, data);
return KERNEL_STATUS_SUCCESS;
}
void ioapic_init(void){
ioapic_t *ioapic = (ioapic_t*) find_ics(0x1);
if(!ioapic){
klog(LOG_ERROR, __func__, "IOAPIC ICS not found\n");
kkill();
}
ioapic_address = ioapic->ioapic_address + hhdmoffset;
}
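set_redir_entry packs the 64-bit IOREDTBL entry as: vector in bits 0-7, delivery mode in bits 8-10, destination mode in bit 11, trigger mode in bit 15, and the destination field in bits 56-63. The packing can be checked in isolation; pack_redir_entry below is an illustrative stand-alone copy, not a function from this repo:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative re-implementation of the bit packing done by
 * set_redir_entry for an I/O APIC redirection table entry. */
static uint64_t pack_redir_entry(uint8_t vector, uint8_t delivery,
                                 uint8_t trigger, uint8_t dest_field,
                                 uint8_t dest_mode) {
    return ((uint64_t)dest_field << 56) |  /* destination (APIC ID)   */
           ((uint64_t)trigger    << 15) |  /* 0 = edge, 1 = level     */
           ((uint64_t)dest_mode  << 11) |  /* 0 = physical, 1 = logical */
           ((uint64_t)delivery   <<  8) |  /* 0 = fixed, ...          */
           vector;                         /* IDT vector to raise     */
}
```

For example, routing a pin to vector 0x21 with fixed delivery, edge trigger, physical destination APIC 0 yields an entry whose only set bits are the vector itself.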

src/hal/ioapic.h Normal file
@@ -0,0 +1,13 @@
#include "error.h"
#include <stdint.h>
void ioapic_init(void);
void write_redir_entry(uint8_t reg, uint64_t data);
kstatus set_redir_entry(uint8_t pin, uint8_t vector, uint8_t delivery, uint8_t trigger, uint8_t destination_field, uint8_t destination_mode);
#define IOREGSEL 0x0
#define IOWIN 0x10
#define IOAPICID 0x0
#define IOAPICVER 0x1
#define IOAPICARB 0x2
#define IOREDTBL(x) (0x10 + (x * 2)) // 0-23 registers

src/hal/smp.c Normal file
@@ -0,0 +1,112 @@
#include <limine.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <SFB25.h>
#include "gdt.h"
#include "smp.h"
#include "apic.h"
#include "idt.h"
#include "../mm/vmm.h"
#include <lock.h>
#include <io.h>
#include <string.h>
static volatile struct limine_smp_request smp_request = {
.id = LIMINE_SMP_REQUEST,
.revision = 0,
};
extern void s_load_idt();
extern void s_load_gdt();
extern uint64_t hhdmoffset;
/* Returns the CPU structure for this particular CPU */
cpu_state *get_cpu_struct(){
return (cpu_state*)rdmsr(GSBASE);
}
uint64_t get_cpu_count(){
return smp_request.response->cpu_count;
}
bool get_cpu_struct_initialized(){
/* Once initialized, GS.base holds a higher-half pointer to the CPU struct */
return rdmsr(GSBASE) >= hhdmoffset;
}
atomic_flag ap_init_lock = ATOMIC_FLAG_INIT;
void ap_init(struct limine_smp_info *smp_info){
acquire_lock(&ap_init_lock);
/* Load the GDT */
s_load_gdt();
/* Load the IDT */
s_load_idt();
/* Set the CR3 context */
extern uint64_t *kernel_page_map;
vmm_set_ctx(kernel_page_map);
asm volatile(
"movq %%cr3, %%rax\n\
movq %%rax, %%cr3\n"
: : : "rax"
);
cpu_state *cpu_struct = (cpu_state*)kmalloc(sizeof(cpu_state));
memset(cpu_struct, 0, sizeof(cpu_state));
cpu_struct->lapic_id = smp_info->lapic_id;
wrmsr(KERNELGSBASE, (uint64_t)cpu_struct);
wrmsr(GSBASE, (uint64_t)cpu_struct);
/* Initialize APIC & APIC timer */
ap_apic_init();
free_lock(&ap_init_lock);
for(;;);
scheduler_init();
}
void smp_init(){
if(!smp_request.response){
klog(LOG_ERROR, __func__, "Failed to get SMP response from bootloader\n");
kkill();
}
struct limine_smp_response *smp_response = smp_request.response;
kprintf("smp: {d} CPUs\n", smp_response->cpu_count);
for(uint64_t i = 0; i < smp_response->cpu_count; i++){
/* Pointer to smp_info is passed in RDI by Limine, so no need to pass any arguments here */
smp_response->cpus[i]->goto_address = &ap_init;
}
/* -- Setup CPU structure for BSP -- */
/* Allocate CPU structure */
cpu_state *cpu_struct = (cpu_state*)kmalloc(sizeof(cpu_state));
cpu_struct->lapic_id = smp_response->cpus[0]->lapic_id;
wrmsr(KERNELGSBASE, (uint64_t)cpu_struct);
wrmsr(GSBASE, (uint64_t)cpu_struct);
/* If one of the APs has halted, then halt the BSP */
extern bool kernel_killed;
if(kernel_killed == true){
kkill();
}
}

src/hal/smp.h Normal file
@@ -0,0 +1,23 @@
#pragma once
#include <stdbool.h>
#include <stdint.h>
#include "../scheduler/sched.h"
#define GSBASE 0xC0000101
#define KERNELGSBASE 0xC0000102
typedef struct cpu_state {
uint32_t lapic_id;
uint64_t lapic_timer_ticks;
proc process_list[PROC_MAX];
proc *current_process;
uint16_t process_count;
context scheduler_context;
} __attribute__((packed)) cpu_state;
void smp_init();
cpu_state *get_cpu_struct();
uint64_t get_cpu_count();
bool get_cpu_struct_initialized();

src/hal/timer.c Normal file
@@ -0,0 +1,24 @@
#include "../sys/acpi.h"
#include "../hal/ioapic.h"
#include "../hal/apic.h"
#include "../drivers/pmt.h"
#include "timer.h"
#include <stdio.h>
#include <SFB25.h>
/* Determines which timer will be used for calibration */
int calibration_timer = -1;
void timer_init(void){
if(pmt_init() == -1){
klog(LOG_INFO, __func__, "PMT Timer not found, falling back");
/* Fall back to PIT */
}else{
calibration_timer = PMT;
}
}
void sleep(int ms){
/* Eventually fix this */
apic_sleep(ms);
}

src/hal/timer.h Normal file
@@ -0,0 +1,11 @@
#include <stdint.h>
enum USABLE_TIMERS {
HPET = 0,
PMT,
PIT,
};
void timer_init(void);
void apic_timer_handler(void);
void sleep(int ms);

src/hal/tsc.c Normal file
@@ -0,0 +1,77 @@
#include <cpuid.h>
#include <stdio.h>
#include <stdint.h>
#include "error.h"
#include "../drivers/pmt.h"
uint32_t core_crystal_clock = 0;
void enable_tsc(){
/* Clear CR4.TSD (bit 2); with TSD clear, RDTSC is allowed at any CPL.
RAX is clobbered, so it must be declared to the compiler. */
asm volatile(".intel_syntax noprefix\n\
mov rax, cr4\n\
and rax, ~(1 << 2)\n\
mov cr4, rax\n\
.att_syntax prefix" : : : "rax");
}
void disable_tsc(){
/* Set CR4.TSD (bit 2); RDTSC then faults outside ring 0 */
asm volatile(".intel_syntax noprefix\n\
mov rax, cr4\n\
or rax, (1 << 2)\n\
mov cr4, rax\n\
.att_syntax prefix" : : : "rax");
}
uint64_t read_tsc(){
uint32_t eax, edx;
/* RDTSC returns the counter low half in EAX, high half in EDX */
asm volatile("rdtsc" : "=a"(eax), "=d"(edx));
return ((uint64_t)edx << 32) | eax;
}
kstatus tsc_init(){
uint32_t edx, unused;
/* Check if there is an invariant TSC */
__get_cpuid(0x80000007, &unused, &unused, &unused, &edx);
if((edx & (1 << 8)) == 0){
return KERNEL_STATUS_ERROR;
}
kprintf("tsc: Invariant TSC found\n");
/* Get the TSC frequency so we can determine TSC speed.
CPUID.15H: EAX = ratio denominator, EBX = ratio numerator,
ECX = core crystal clock in Hz (0 if not enumerated) */
uint32_t denominator, numerator, crystal;
__get_cpuid(0x15, &denominator, &numerator, &crystal, &unused);
if(crystal != 0 && denominator != 0){
kprintf("cpuid 15h supported!\n");
/* TSC frequency in Hz = crystal clock * (numerator / denominator) */
core_crystal_clock = (uint32_t)((uint64_t)crystal * numerator / denominator);
}else{
/* Calibrate using the PMT */
enable_tsc();
uint64_t read1 = read_tsc();
pmt_delay(1000);
uint64_t read2 = read_tsc();
disable_tsc();
core_crystal_clock = (uint32_t)(read2 - read1);
}
kprintf("Core crystal clock: {d}\n", core_crystal_clock);
enable_tsc();
return KERNEL_STATUS_SUCCESS;
}
uint64_t tsc_get_timestamp(){
if(core_crystal_clock == 0){
return 0;
}
uint64_t read = read_tsc();
return read / core_crystal_clock;
}
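read_tsc works by combining RDTSC's EDX:EAX halves into one 64-bit count. That combine can be checked in isolation (the values below are made up, not real TSC readings):

```c
#include <assert.h>
#include <stdint.h>

/* The combine used in read_tsc(): RDTSC returns the low 32 bits of
 * the timestamp counter in EAX and the high 32 bits in EDX. */
static uint64_t combine_tsc(uint32_t eax, uint32_t edx) {
    return ((uint64_t)edx << 32) | eax;
}
```

The cast on edx matters: without it, the shift would be performed on a 32-bit value and the high half would be lost.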

src/hal/tsc.h Normal file
@@ -0,0 +1,6 @@
#include "error.h"
#include <stdint.h>
kstatus tsc_init();
uint64_t tsc_get_timestamp();

src/include/SFB25.h Normal file
@@ -0,0 +1,18 @@
#include <stdint.h>
void kkill(void);
typedef char link_symbol_ptr[];
#define ALIGN_UP_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define ALIGN_UP(x, val) ALIGN_UP_MASK(x, (typeof(x))(val) - 1)
#define ALIGN_DOWN_MASK(x, mask) ((x) & ~(mask))
#define ALIGN_DOWN(x, val) ALIGN_DOWN_MASK(x, (typeof(x))(val) - 1)
#define IS_ALIGNED_MASK(x, mask) (((x) & (mask)) == 0)
#define IS_ALIGNED(x, val) IS_ALIGNED_MASK(x, (typeof(x))(val) - 1)
#define PAGE_ROUND_UP(size) ALIGN_UP(size, PAGE_SIZE)
#define PAGE_ROUND_DOWN(size) ALIGN_DOWN(size, PAGE_SIZE)
void *kmalloc(uint64_t size);

src/include/error.h Normal file
@@ -0,0 +1,14 @@
#ifndef ERROR_H
#define ERROR_H
typedef enum {
/* Success */
KERNEL_STATUS_SUCCESS,
/* General error */
KERNEL_STATUS_ERROR,
} kstatus;
#endif

src/include/io.h Normal file
@@ -0,0 +1,12 @@
#include <stdint.h>
void outb(uint16_t port, uint8_t val);
void outw(uint16_t port, uint16_t val);
void outl(uint16_t port, uint32_t val);
uint8_t inb(uint16_t port);
uint16_t inw(uint16_t port);
uint32_t inl(uint16_t port);
void wrmsr(uint64_t msr, uint64_t value);
uint64_t rdmsr(uint64_t msr);

src/include/limine.h Normal file
@@ -0,0 +1,621 @@
/* BSD Zero Clause License */
/* Copyright (C) 2022-2024 mintsuki and contributors.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
* SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
* OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
* CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef LIMINE_H
#define LIMINE_H 1
#ifdef __cplusplus
extern "C" {
#endif
#include <stdint.h>
/* Misc */
#ifdef LIMINE_NO_POINTERS
# define LIMINE_PTR(TYPE) uint64_t
#else
# define LIMINE_PTR(TYPE) TYPE
#endif
#ifdef __GNUC__
# define LIMINE_DEPRECATED __attribute__((__deprecated__))
# define LIMINE_DEPRECATED_IGNORE_START \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wdeprecated-declarations\"")
# define LIMINE_DEPRECATED_IGNORE_END \
_Pragma("GCC diagnostic pop")
#else
# define LIMINE_DEPRECATED
# define LIMINE_DEPRECATED_IGNORE_START
# define LIMINE_DEPRECATED_IGNORE_END
#endif
#define LIMINE_REQUESTS_START_MARKER \
uint64_t limine_requests_start_marker[4] = { 0xf6b8f4b39de7d1ae, 0xfab91a6940fcb9cf, \
0x785c6ed015d3e316, 0x181e920a7852b9d9 };
#define LIMINE_REQUESTS_END_MARKER \
uint64_t limine_requests_end_marker[2] = { 0xadc0e0531bb10d03, 0x9572709f31764c62 };
#define LIMINE_REQUESTS_DELIMITER LIMINE_REQUESTS_END_MARKER
#define LIMINE_BASE_REVISION(N) \
uint64_t limine_base_revision[3] = { 0xf9562b2d5c95a6c8, 0x6a7b384944536bdc, (N) };
#define LIMINE_BASE_REVISION_SUPPORTED (limine_base_revision[2] == 0)
#define LIMINE_COMMON_MAGIC 0xc7b1dd30df4c8b88, 0x0a82e883a194f07b
struct limine_uuid {
uint32_t a;
uint16_t b;
uint16_t c;
uint8_t d[8];
};
#define LIMINE_MEDIA_TYPE_GENERIC 0
#define LIMINE_MEDIA_TYPE_OPTICAL 1
#define LIMINE_MEDIA_TYPE_TFTP 2
struct limine_file {
uint64_t revision;
LIMINE_PTR(void *) address;
uint64_t size;
LIMINE_PTR(char *) path;
LIMINE_PTR(char *) cmdline;
uint32_t media_type;
uint32_t unused;
uint32_t tftp_ip;
uint32_t tftp_port;
uint32_t partition_index;
uint32_t mbr_disk_id;
struct limine_uuid gpt_disk_uuid;
struct limine_uuid gpt_part_uuid;
struct limine_uuid part_uuid;
};
/* Boot info */
#define LIMINE_BOOTLOADER_INFO_REQUEST { LIMINE_COMMON_MAGIC, 0xf55038d8e2a1202f, 0x279426fcf5f59740 }
struct limine_bootloader_info_response {
uint64_t revision;
LIMINE_PTR(char *) name;
LIMINE_PTR(char *) version;
};
struct limine_bootloader_info_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_bootloader_info_response *) response;
};
/* Firmware type */
#define LIMINE_FIRMWARE_TYPE_REQUEST { LIMINE_COMMON_MAGIC, 0x8c2f75d90bef28a8, 0x7045a4688eac00c3 }
#define LIMINE_FIRMWARE_TYPE_X86BIOS 0
#define LIMINE_FIRMWARE_TYPE_UEFI32 1
#define LIMINE_FIRMWARE_TYPE_UEFI64 2
struct limine_firmware_type_response {
uint64_t revision;
uint64_t firmware_type;
};
struct limine_firmware_type_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_firmware_type_response *) response;
};
/* Stack size */
#define LIMINE_STACK_SIZE_REQUEST { LIMINE_COMMON_MAGIC, 0x224ef0460a8e8926, 0xe1cb0fc25f46ea3d }
struct limine_stack_size_response {
uint64_t revision;
};
struct limine_stack_size_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_stack_size_response *) response;
uint64_t stack_size;
};
/* HHDM */
#define LIMINE_HHDM_REQUEST { LIMINE_COMMON_MAGIC, 0x48dcf1cb8ad2b852, 0x63984e959a98244b }
struct limine_hhdm_response {
uint64_t revision;
uint64_t offset;
};
struct limine_hhdm_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_hhdm_response *) response;
};
/* Framebuffer */
#define LIMINE_FRAMEBUFFER_REQUEST { LIMINE_COMMON_MAGIC, 0x9d5827dcd881dd75, 0xa3148604f6fab11b }
#define LIMINE_FRAMEBUFFER_RGB 1
struct limine_video_mode {
uint64_t pitch;
uint64_t width;
uint64_t height;
uint16_t bpp;
uint8_t memory_model;
uint8_t red_mask_size;
uint8_t red_mask_shift;
uint8_t green_mask_size;
uint8_t green_mask_shift;
uint8_t blue_mask_size;
uint8_t blue_mask_shift;
};
struct limine_framebuffer {
LIMINE_PTR(void *) address;
uint64_t width;
uint64_t height;
uint64_t pitch;
uint16_t bpp;
uint8_t memory_model;
uint8_t red_mask_size;
uint8_t red_mask_shift;
uint8_t green_mask_size;
uint8_t green_mask_shift;
uint8_t blue_mask_size;
uint8_t blue_mask_shift;
uint8_t unused[7];
uint64_t edid_size;
LIMINE_PTR(void *) edid;
/* Response revision 1 */
uint64_t mode_count;
LIMINE_PTR(struct limine_video_mode **) modes;
};
struct limine_framebuffer_response {
uint64_t revision;
uint64_t framebuffer_count;
LIMINE_PTR(struct limine_framebuffer **) framebuffers;
};
struct limine_framebuffer_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_framebuffer_response *) response;
};
/* Terminal */
#define LIMINE_TERMINAL_REQUEST { LIMINE_COMMON_MAGIC, 0xc8ac59310c2b0844, 0xa68d0c7265d38878 }
#define LIMINE_TERMINAL_CB_DEC 10
#define LIMINE_TERMINAL_CB_BELL 20
#define LIMINE_TERMINAL_CB_PRIVATE_ID 30
#define LIMINE_TERMINAL_CB_STATUS_REPORT 40
#define LIMINE_TERMINAL_CB_POS_REPORT 50
#define LIMINE_TERMINAL_CB_KBD_LEDS 60
#define LIMINE_TERMINAL_CB_MODE 70
#define LIMINE_TERMINAL_CB_LINUX 80
#define LIMINE_TERMINAL_CTX_SIZE ((uint64_t)(-1))
#define LIMINE_TERMINAL_CTX_SAVE ((uint64_t)(-2))
#define LIMINE_TERMINAL_CTX_RESTORE ((uint64_t)(-3))
#define LIMINE_TERMINAL_FULL_REFRESH ((uint64_t)(-4))
/* Response revision 1 */
#define LIMINE_TERMINAL_OOB_OUTPUT_GET ((uint64_t)(-10))
#define LIMINE_TERMINAL_OOB_OUTPUT_SET ((uint64_t)(-11))
#define LIMINE_TERMINAL_OOB_OUTPUT_OCRNL (1 << 0)
#define LIMINE_TERMINAL_OOB_OUTPUT_OFDEL (1 << 1)
#define LIMINE_TERMINAL_OOB_OUTPUT_OFILL (1 << 2)
#define LIMINE_TERMINAL_OOB_OUTPUT_OLCUC (1 << 3)
#define LIMINE_TERMINAL_OOB_OUTPUT_ONLCR (1 << 4)
#define LIMINE_TERMINAL_OOB_OUTPUT_ONLRET (1 << 5)
#define LIMINE_TERMINAL_OOB_OUTPUT_ONOCR (1 << 6)
#define LIMINE_TERMINAL_OOB_OUTPUT_OPOST (1 << 7)
LIMINE_DEPRECATED_IGNORE_START
struct LIMINE_DEPRECATED limine_terminal;
typedef void (*limine_terminal_write)(struct limine_terminal *, const char *, uint64_t);
typedef void (*limine_terminal_callback)(struct limine_terminal *, uint64_t, uint64_t, uint64_t, uint64_t);
struct LIMINE_DEPRECATED limine_terminal {
uint64_t columns;
uint64_t rows;
LIMINE_PTR(struct limine_framebuffer *) framebuffer;
};
struct LIMINE_DEPRECATED limine_terminal_response {
uint64_t revision;
uint64_t terminal_count;
LIMINE_PTR(struct limine_terminal **) terminals;
LIMINE_PTR(limine_terminal_write) write;
};
struct LIMINE_DEPRECATED limine_terminal_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_terminal_response *) response;
LIMINE_PTR(limine_terminal_callback) callback;
};
LIMINE_DEPRECATED_IGNORE_END
/* Paging mode */
#define LIMINE_PAGING_MODE_REQUEST { LIMINE_COMMON_MAGIC, 0x95c1a0edab0944cb, 0xa4e5cb3842f7488a }
#if defined (__x86_64__) || defined (__i386__)
#define LIMINE_PAGING_MODE_X86_64_4LVL 0
#define LIMINE_PAGING_MODE_X86_64_5LVL 1
#define LIMINE_PAGING_MODE_MIN LIMINE_PAGING_MODE_X86_64_4LVL
#define LIMINE_PAGING_MODE_DEFAULT LIMINE_PAGING_MODE_X86_64_4LVL
#elif defined (__aarch64__)
#define LIMINE_PAGING_MODE_AARCH64_4LVL 0
#define LIMINE_PAGING_MODE_AARCH64_5LVL 1
#define LIMINE_PAGING_MODE_MIN LIMINE_PAGING_MODE_AARCH64_4LVL
#define LIMINE_PAGING_MODE_DEFAULT LIMINE_PAGING_MODE_AARCH64_4LVL
#elif defined (__riscv) && (__riscv_xlen == 64)
#define LIMINE_PAGING_MODE_RISCV_SV39 0
#define LIMINE_PAGING_MODE_RISCV_SV48 1
#define LIMINE_PAGING_MODE_RISCV_SV57 2
#define LIMINE_PAGING_MODE_MIN LIMINE_PAGING_MODE_RISCV_SV39
#define LIMINE_PAGING_MODE_DEFAULT LIMINE_PAGING_MODE_RISCV_SV48
#elif defined (__loongarch__) && (__loongarch_grlen == 64)
#define LIMINE_PAGING_MODE_LOONGARCH64_4LVL 0
#define LIMINE_PAGING_MODE_MIN LIMINE_PAGING_MODE_LOONGARCH64_4LVL
#define LIMINE_PAGING_MODE_DEFAULT LIMINE_PAGING_MODE_LOONGARCH64_4LVL
#else
#error Unknown architecture
#endif
struct limine_paging_mode_response {
uint64_t revision;
uint64_t mode;
};
struct limine_paging_mode_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_paging_mode_response *) response;
uint64_t mode;
uint64_t max_mode;
uint64_t min_mode;
};
/* 5-level paging */
#define LIMINE_5_LEVEL_PAGING_REQUEST { LIMINE_COMMON_MAGIC, 0x94469551da9b3192, 0xebe5e86db7382888 }
LIMINE_DEPRECATED_IGNORE_START
struct LIMINE_DEPRECATED limine_5_level_paging_response {
uint64_t revision;
};
struct LIMINE_DEPRECATED limine_5_level_paging_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_5_level_paging_response *) response;
};
LIMINE_DEPRECATED_IGNORE_END
/* SMP */
#define LIMINE_SMP_REQUEST { LIMINE_COMMON_MAGIC, 0x95a67b819a1b857e, 0xa0b61b723b6a73e0 }
struct limine_smp_info;
typedef void (*limine_goto_address)(struct limine_smp_info *);
#if defined (__x86_64__) || defined (__i386__)
#define LIMINE_SMP_X2APIC (1 << 0)
struct limine_smp_info {
uint32_t processor_id;
uint32_t lapic_id;
uint64_t reserved;
LIMINE_PTR(limine_goto_address) goto_address;
uint64_t extra_argument;
};
struct limine_smp_response {
uint64_t revision;
uint32_t flags;
uint32_t bsp_lapic_id;
uint64_t cpu_count;
LIMINE_PTR(struct limine_smp_info **) cpus;
};
#elif defined (__aarch64__)
struct limine_smp_info {
uint32_t processor_id;
uint32_t reserved1;
uint64_t mpidr;
uint64_t reserved;
LIMINE_PTR(limine_goto_address) goto_address;
uint64_t extra_argument;
};
struct limine_smp_response {
uint64_t revision;
uint64_t flags;
uint64_t bsp_mpidr;
uint64_t cpu_count;
LIMINE_PTR(struct limine_smp_info **) cpus;
};
#elif defined (__riscv) && (__riscv_xlen == 64)
struct limine_smp_info {
uint64_t processor_id;
uint64_t hartid;
uint64_t reserved;
LIMINE_PTR(limine_goto_address) goto_address;
uint64_t extra_argument;
};
struct limine_smp_response {
uint64_t revision;
uint64_t flags;
uint64_t bsp_hartid;
uint64_t cpu_count;
LIMINE_PTR(struct limine_smp_info **) cpus;
};
#elif defined (__loongarch__) && (__loongarch_grlen == 64)
struct limine_smp_info {
uint64_t reserved;
};
struct limine_smp_response {
uint64_t cpu_count;
LIMINE_PTR(struct limine_smp_info **) cpus;
};
#else
#error Unknown architecture
#endif
struct limine_smp_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_smp_response *) response;
uint64_t flags;
};
/* Memory map */
#define LIMINE_MEMMAP_REQUEST { LIMINE_COMMON_MAGIC, 0x67cf3d9d378a806f, 0xe304acdfc50c3c62 }
#define LIMINE_MEMMAP_USABLE 0
#define LIMINE_MEMMAP_RESERVED 1
#define LIMINE_MEMMAP_ACPI_RECLAIMABLE 2
#define LIMINE_MEMMAP_ACPI_NVS 3
#define LIMINE_MEMMAP_BAD_MEMORY 4
#define LIMINE_MEMMAP_BOOTLOADER_RECLAIMABLE 5
#define LIMINE_MEMMAP_KERNEL_AND_MODULES 6
#define LIMINE_MEMMAP_FRAMEBUFFER 7
struct limine_memmap_entry {
uint64_t base;
uint64_t length;
uint64_t type;
};
struct limine_memmap_response {
uint64_t revision;
uint64_t entry_count;
LIMINE_PTR(struct limine_memmap_entry **) entries;
};
struct limine_memmap_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_memmap_response *) response;
};
/* Entry point */
#define LIMINE_ENTRY_POINT_REQUEST { LIMINE_COMMON_MAGIC, 0x13d86c035a1cd3e1, 0x2b0caa89d8f3026a }
typedef void (*limine_entry_point)(void);
struct limine_entry_point_response {
uint64_t revision;
};
struct limine_entry_point_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_entry_point_response *) response;
LIMINE_PTR(limine_entry_point) entry;
};
/* Kernel File */
#define LIMINE_KERNEL_FILE_REQUEST { LIMINE_COMMON_MAGIC, 0xad97e90e83f1ed67, 0x31eb5d1c5ff23b69 }
struct limine_kernel_file_response {
uint64_t revision;
LIMINE_PTR(struct limine_file *) kernel_file;
};
struct limine_kernel_file_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_kernel_file_response *) response;
};
/* Module */
#define LIMINE_MODULE_REQUEST { LIMINE_COMMON_MAGIC, 0x3e7e279702be32af, 0xca1c4f3bd1280cee }
#define LIMINE_INTERNAL_MODULE_REQUIRED (1 << 0)
#define LIMINE_INTERNAL_MODULE_COMPRESSED (1 << 1)
struct limine_internal_module {
LIMINE_PTR(const char *) path;
LIMINE_PTR(const char *) cmdline;
uint64_t flags;
};
struct limine_module_response {
uint64_t revision;
uint64_t module_count;
LIMINE_PTR(struct limine_file **) modules;
};
struct limine_module_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_module_response *) response;
/* Request revision 1 */
uint64_t internal_module_count;
LIMINE_PTR(struct limine_internal_module **) internal_modules;
};
/* RSDP */
#define LIMINE_RSDP_REQUEST { LIMINE_COMMON_MAGIC, 0xc5e77b6b397e7b43, 0x27637845accdcf3c }
struct limine_rsdp_response {
uint64_t revision;
LIMINE_PTR(void *) address;
};
struct limine_rsdp_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_rsdp_response *) response;
};
/* SMBIOS */
#define LIMINE_SMBIOS_REQUEST { LIMINE_COMMON_MAGIC, 0x9e9046f11e095391, 0xaa4a520fefbde5ee }
struct limine_smbios_response {
uint64_t revision;
LIMINE_PTR(void *) entry_32;
LIMINE_PTR(void *) entry_64;
};
struct limine_smbios_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_smbios_response *) response;
};
/* EFI system table */
#define LIMINE_EFI_SYSTEM_TABLE_REQUEST { LIMINE_COMMON_MAGIC, 0x5ceba5163eaaf6d6, 0x0a6981610cf65fcc }
struct limine_efi_system_table_response {
uint64_t revision;
LIMINE_PTR(void *) address;
};
struct limine_efi_system_table_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_efi_system_table_response *) response;
};
/* EFI memory map */
#define LIMINE_EFI_MEMMAP_REQUEST { LIMINE_COMMON_MAGIC, 0x7df62a431d6872d5, 0xa4fcdfb3e57306c8 }
struct limine_efi_memmap_response {
uint64_t revision;
LIMINE_PTR(void *) memmap;
uint64_t memmap_size;
uint64_t desc_size;
uint64_t desc_version;
};
struct limine_efi_memmap_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_efi_memmap_response *) response;
};
/* Boot time */
#define LIMINE_BOOT_TIME_REQUEST { LIMINE_COMMON_MAGIC, 0x502746e184c088aa, 0xfbc5ec83e6327893 }
struct limine_boot_time_response {
uint64_t revision;
int64_t boot_time;
};
struct limine_boot_time_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_boot_time_response *) response;
};
/* Kernel address */
#define LIMINE_KERNEL_ADDRESS_REQUEST { LIMINE_COMMON_MAGIC, 0x71ba76863cc55f63, 0xb2644a48c516a487 }
struct limine_kernel_address_response {
uint64_t revision;
uint64_t physical_base;
uint64_t virtual_base;
};
struct limine_kernel_address_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_kernel_address_response *) response;
};
/* Device Tree Blob */
#define LIMINE_DTB_REQUEST { LIMINE_COMMON_MAGIC, 0xb40ddb48fb54bac7, 0x545081493f81ffb7 }
struct limine_dtb_response {
uint64_t revision;
LIMINE_PTR(void *) dtb_ptr;
};
struct limine_dtb_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_dtb_response *) response;
};
#ifdef __cplusplus
}
#endif
#endif

src/include/lock.h Normal file
@@ -0,0 +1,9 @@
#include <stdatomic.h>
#ifndef SPINLOCK_H
#define SPINLOCK_H
void acquire_lock(atomic_flag *lock);
void free_lock(atomic_flag *lock);
#endif
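A minimal sketch of what acquire_lock/free_lock over a C11 atomic_flag typically look like. These bodies are assumptions for illustration (the repo's real definitions live in its lock implementation, which is not shown in this diff):

```c
#include <stdatomic.h>

/* Assumed implementations: spin until the flag is clear, then own it. */
static void acquire_lock_sketch(atomic_flag *lock) {
    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
        ; /* busy-wait until the previous holder clears the flag */
}

static void free_lock_sketch(atomic_flag *lock) {
    atomic_flag_clear_explicit(lock, memory_order_release);
}

/* Tiny single-threaded self-check: lock, probe, unlock, probe again */
static int lock_then_probe(void) {
    atomic_flag f = ATOMIC_FLAG_INIT;
    acquire_lock_sketch(&f);
    int held = atomic_flag_test_and_set(&f);        /* 1: already held */
    free_lock_sketch(&f);
    int free_again = !atomic_flag_test_and_set(&f); /* 1: was free    */
    return held && free_again;
}
```

The acquire/release memory orders are what make the pattern usable as a lock: writes made under the lock become visible to the next CPU that acquires it, which is exactly what ap_init relies on when serializing AP bring-up.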

src/include/stdio.h Normal file
@@ -0,0 +1,41 @@
#include <stddef.h>
#include <stdint.h>
#include "../flanterm/src/flanterm.h"
enum {
LOG_INFO = 0,
LOG_WARN,
LOG_ERROR,
LOG_SUCCESS,
};
void klog(int level, const char *func, const char *msg);
int kprintf(const char *format_string, ...);
int serial_kprintf(const char *format_string, ...);
void print_char(struct flanterm_context *ft_ctx, char c);
void print_str(struct flanterm_context *ft_ctx, char *str);
void print_int(struct flanterm_context *ft_ctx, uint64_t i);
void print_hex(struct flanterm_context *ft_ctx, uint64_t num);
void print_bin(struct flanterm_context *ft_ctx, uint64_t num);
void serial_print_char(char c);
void serial_print_int(uint64_t i);
void serial_print_hex(uint64_t num);
void serial_print_bin(uint64_t num);
void kernel_framebuffer_print(char *buffer, size_t n);
void kernel_serial_print(char *buffer, size_t n);
char toupper(char c);
char dtoc(int digit);
#define ANSI_COLOR_RED "\x1b[31m"
#define ANSI_COLOR_GREEN "\x1b[32m"
#define ANSI_COLOR_YELLOW "\x1b[33m"
#define ANSI_COLOR_BLUE "\x1b[34m"
#define ANSI_COLOR_MAGENTA "\x1b[35m"
#define ANSI_COLOR_CYAN "\x1b[36m"
#define ANSI_COLOR_RESET "\x1b[0m"

src/include/string.h Normal file
@@ -0,0 +1,16 @@
#ifndef STRING_H
#define STRING_H
#include <stdint.h>
void *memset(void *addr, int c, uint64_t n);
void *memcpy(void *dest, void *src, uint64_t n);
void *memmove(void *dest, const void *src, uint64_t n);
int memcmp(const void *s1, const void *s2, uint64_t n);
uint64_t strlen(const char* str);
#endif

src/include/uacpi/acpi.h Normal file (1430 lines)
File diff suppressed because it is too large

@@ -0,0 +1,45 @@
#pragma once
#include <uacpi/types.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Set the minimum log level to be accepted by the logging facilities. Any logs
* below this level are discarded and not passed to uacpi_kernel_log, etc.
*
* 0 is treated as a special value that resets the setting to the default value.
*
* E.g. for a log level of UACPI_LOG_INFO:
* UACPI_LOG_DEBUG -> discarded
* UACPI_LOG_TRACE -> discarded
* UACPI_LOG_INFO -> allowed
* UACPI_LOG_WARN -> allowed
* UACPI_LOG_ERROR -> allowed
*/
void uacpi_context_set_log_level(uacpi_log_level);
/*
* Set the maximum number of seconds a While loop is allowed to run for before
* getting timed out.
*
* 0 is treated a special value that resets the setting to the default value.
*/
void uacpi_context_set_loop_timeout(uacpi_u32 seconds);
/*
* Set the maximum call stack depth AML can reach before getting aborted.
*
* 0 is treated as a special value that resets the setting to the default value.
*/
void uacpi_context_set_max_call_stack_depth(uacpi_u32 depth);
uacpi_u32 uacpi_context_get_loop_timeout(void);
void uacpi_context_set_proactive_table_checksum(uacpi_bool);
#ifdef __cplusplus
}
#endif

src/include/uacpi/event.h Normal file
@@ -0,0 +1,282 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/uacpi.h>
#include <uacpi/acpi.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef enum uacpi_fixed_event {
UACPI_FIXED_EVENT_TIMER_STATUS = 1,
UACPI_FIXED_EVENT_POWER_BUTTON,
UACPI_FIXED_EVENT_SLEEP_BUTTON,
UACPI_FIXED_EVENT_RTC,
UACPI_FIXED_EVENT_MAX = UACPI_FIXED_EVENT_RTC,
} uacpi_fixed_event;
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_fixed_event_handler(
uacpi_fixed_event event, uacpi_interrupt_handler handler, uacpi_handle user
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_uninstall_fixed_event_handler(
uacpi_fixed_event event
))
/*
* Enable/disable a fixed event. Note that the event is automatically enabled
* upon installing a handler to it.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_fixed_event(uacpi_fixed_event event)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_fixed_event(uacpi_fixed_event event)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_clear_fixed_event(uacpi_fixed_event event)
)
typedef enum uacpi_event_info {
// Event is enabled in software
UACPI_EVENT_INFO_ENABLED = (1 << 0),
// Event is enabled in software (only for wake)
UACPI_EVENT_INFO_ENABLED_FOR_WAKE = (1 << 1),
// Event is masked
UACPI_EVENT_INFO_MASKED = (1 << 2),
// Event has a handler attached
UACPI_EVENT_INFO_HAS_HANDLER = (1 << 3),
// Hardware enable bit is set
UACPI_EVENT_INFO_HW_ENABLED = (1 << 4),
// Hardware status bit is set
UACPI_EVENT_INFO_HW_STATUS = (1 << 5),
} uacpi_event_info;
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_fixed_event_info(
uacpi_fixed_event event, uacpi_event_info *out_info
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_gpe_info(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_event_info *out_info
))
// Set if the handler wishes to reenable the GPE it just handled
#define UACPI_GPE_REENABLE (1 << 7)
typedef uacpi_interrupt_ret (*uacpi_gpe_handler)(
uacpi_handle ctx, uacpi_namespace_node *gpe_device, uacpi_u16 idx
);
typedef enum uacpi_gpe_triggering {
UACPI_GPE_TRIGGERING_LEVEL = 0,
UACPI_GPE_TRIGGERING_EDGE = 1,
UACPI_GPE_TRIGGERING_MAX = UACPI_GPE_TRIGGERING_EDGE,
} uacpi_gpe_triggering;
const uacpi_char *uacpi_gpe_triggering_to_string(
uacpi_gpe_triggering triggering
);
/*
* Installs a handler to the provided GPE at 'idx' controlled by device
* 'gpe_device'. The GPE is automatically disabled & cleared according to the
* configured triggering upon invoking the handler. The event is optionally
* re-enabled (by returning UACPI_GPE_REENABLE from the handler)
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_gpe_handler(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_gpe_triggering triggering, uacpi_gpe_handler handler, uacpi_handle ctx
))
/*
* Installs a raw handler to the provided GPE at 'idx' controlled by device
* 'gpe_device'. The handler is dispatched immediately after the event is
* received, status & enable bits are untouched.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_gpe_handler_raw(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_gpe_triggering triggering, uacpi_gpe_handler handler, uacpi_handle ctx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_uninstall_gpe_handler(
uacpi_namespace_node *gpe_device, uacpi_u16 idx, uacpi_gpe_handler handler
))
/*
* Marks the GPE 'idx' managed by 'gpe_device' as wake-capable. 'wake_device' is
* optional and configures the GPE to generate an implicit notification whenever
* an event occurs.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_setup_gpe_for_wake(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_namespace_node *wake_device
))
/*
* Mark a GPE managed by 'gpe_device' as enabled/disabled for wake. The GPE must
* have previously been marked by calling uacpi_setup_gpe_for_wake. This
* function only affects the GPE enable register state following the call to
* uacpi_enable_all_wake_gpes.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_gpe_for_wake(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_gpe_for_wake(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Finalize GPE initialization by enabling all GPEs that are not configured for
* wake and have a matching AML handler detected.
*
* This should be called after the kernel power management subsystem has
* enumerated all of the devices, executing their _PRW methods etc., and
* marking those it wishes to use for wake by calling uacpi_setup_gpe_for_wake
* or uacpi_mark_gpe_for_wake.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_finalize_gpe_initialization(void)
)
/*
* Enable/disable a general purpose event managed by 'gpe_device'. Internally
* this uses reference counting to make sure a GPE is not disabled until all
* possible users of it do so. GPEs not marked for wake are enabled
* automatically so this API is only needed for wake events or those that don't
* have a corresponding AML handler.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Clear the status bit of the event 'idx' managed by 'gpe_device'.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_clear_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Suspend/resume a general purpose event managed by 'gpe_device'. This bypasses
* the reference counting mechanism and unconditionally clears/sets the
* corresponding bit in the enable registers. This is used for switching the GPE
* to poll mode.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_suspend_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_resume_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Finish handling the GPE managed by 'gpe_device' at 'idx'. This clears the
* status bit if it hasn't been cleared yet and re-enables the event if it was
* enabled before.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_finish_handling_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Hard mask/unmask a general purpose event at 'idx' managed by 'gpe_device'.
* This is used to permanently silence an event so that further calls to
* enable/disable as well as suspend/resume get ignored. This might be necessary
* for GPEs that cause an event storm due to the kernel's inability to properly
* handle them. The only way to enable a masked event is by a call to unmask.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_mask_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_unmask_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Disable all GPEs currently set up on the system.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_all_gpes(void)
)
/*
* Enable all GPEs not marked as wake. This is only needed after the system
* wakes from a shallow sleep state and is called automatically by wake code.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_all_runtime_gpes(void)
)
/*
* Enable all GPEs marked as wake. This is only needed before the system goes
* to sleep and is called automatically by sleep code.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_all_wake_gpes(void)
)
/*
* Install/uninstall a new GPE block, usually defined by a device in the
* namespace with a _HID of ACPI0006.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_gpe_block(
uacpi_namespace_node *gpe_device, uacpi_u64 address,
uacpi_address_space address_space, uacpi_u16 num_registers,
uacpi_u32 irq
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_uninstall_gpe_block(
uacpi_namespace_node *gpe_device
))
#ifdef __cplusplus
}
#endif

#pragma once
#ifdef __cplusplus
#define UACPI_STATIC_ASSERT static_assert
#else
#define UACPI_STATIC_ASSERT _Static_assert
#endif
#define UACPI_BUILD_BUG_ON_WITH_MSG(expr, msg) UACPI_STATIC_ASSERT(!(expr), msg)
#define UACPI_BUILD_BUG_ON(expr) \
UACPI_BUILD_BUG_ON_WITH_MSG(expr, "BUILD BUG: " #expr " evaluated to true")
#define UACPI_EXPECT_SIZEOF(type, size) \
UACPI_BUILD_BUG_ON_WITH_MSG(sizeof(type) != size, \
"BUILD BUG: invalid type size")

#pragma once
#include <uacpi/platform/compiler.h>

#pragma once
#include <uacpi/acpi.h>
#include <uacpi/types.h>
#include <uacpi/uacpi.h>
#include <uacpi/internal/dynamic_array.h>
#include <uacpi/internal/shareable.h>
#include <uacpi/context.h>
struct uacpi_runtime_context {
/*
* A local copy of the FADT that has been verified & converted to the most
* optimal format for faster access to the registers.
*/
struct acpi_fadt fadt;
/*
* A cached pointer to the FACS so that we don't have to look it up in
* interrupt contexts, where we can't take mutexes.
*/
struct acpi_facs *facs;
/*
* pm1{a,b}_evt_blk split into two registers for convenience
*/
struct acpi_gas pm1a_status_blk;
struct acpi_gas pm1b_status_blk;
struct acpi_gas pm1a_enable_blk;
struct acpi_gas pm1b_enable_blk;
uacpi_u64 flags;
#define UACPI_SLEEP_TYP_INVALID 0xFF
uacpi_u8 last_sleep_typ_a;
uacpi_u8 last_sleep_typ_b;
uacpi_u8 s0_sleep_typ_a;
uacpi_u8 s0_sleep_typ_b;
/*
* This is a per-table value but we mimic the NT implementation:
* treat all other definition blocks as if they were the same revision
* as DSDT.
*/
uacpi_bool is_rev1;
uacpi_bool global_lock_acquired;
#ifndef UACPI_REDUCED_HARDWARE
uacpi_bool is_hardware_reduced;
uacpi_bool was_in_legacy_mode;
uacpi_bool has_global_lock;
uacpi_bool sci_handle_valid;
uacpi_handle sci_handle;
#endif
uacpi_u64 opcodes_executed;
uacpi_u32 loop_timeout_seconds;
uacpi_u32 max_call_stack_depth;
uacpi_u32 global_lock_seq_num;
/*
* These are stored here to protect against stuff like:
* - CopyObject(JUNK, \)
* - CopyObject(JUNK, \_GL)
*/
uacpi_mutex *global_lock_mutex;
uacpi_object *root_object;
#ifndef UACPI_REDUCED_HARDWARE
uacpi_handle *global_lock_event;
uacpi_handle *global_lock_spinlock;
uacpi_bool global_lock_pending;
#endif
uacpi_u8 log_level;
uacpi_u8 init_level;
};
static inline const uacpi_char *uacpi_init_level_to_string(uacpi_u8 lvl)
{
switch (lvl) {
case UACPI_INIT_LEVEL_EARLY:
return "early";
case UACPI_INIT_LEVEL_SUBSYSTEM_INITIALIZED:
return "subsystem initialized";
case UACPI_INIT_LEVEL_NAMESPACE_LOADED:
return "namespace loaded";
case UACPI_INIT_LEVEL_NAMESPACE_INITIALIZED:
return "namespace initialized";
default:
return "<invalid>";
}
}
#define UACPI_ENSURE_INIT_LEVEL_AT_LEAST(lvl) \
do { \
if (uacpi_unlikely(g_uacpi_rt_ctx.init_level < lvl)) { \
uacpi_error( \
"while evaluating %s: init level %d (%s) is too low, " \
"expected at least %d (%s)\n", __FUNCTION__, \
g_uacpi_rt_ctx.init_level, \
uacpi_init_level_to_string(g_uacpi_rt_ctx.init_level), lvl, \
uacpi_init_level_to_string(lvl) \
); \
return UACPI_STATUS_INIT_LEVEL_MISMATCH; \
} \
} while (0)
#define UACPI_ENSURE_INIT_LEVEL_IS(lvl) \
do { \
if (uacpi_unlikely(g_uacpi_rt_ctx.init_level != lvl)) { \
uacpi_error( \
"while evaluating %s: invalid init level %d (%s), " \
"expected %d (%s)\n", __FUNCTION__, \
g_uacpi_rt_ctx.init_level, \
uacpi_init_level_to_string(g_uacpi_rt_ctx.init_level), lvl, \
uacpi_init_level_to_string(lvl) \
); \
return UACPI_STATUS_INIT_LEVEL_MISMATCH; \
} \
} while (0)
extern struct uacpi_runtime_context g_uacpi_rt_ctx;
static inline uacpi_bool uacpi_check_flag(uacpi_u64 flag)
{
return (g_uacpi_rt_ctx.flags & flag) == flag;
}
static inline uacpi_bool uacpi_should_log(enum uacpi_log_level lvl)
{
return lvl <= g_uacpi_rt_ctx.log_level;
}
static inline uacpi_bool uacpi_is_hardware_reduced(void)
{
#ifndef UACPI_REDUCED_HARDWARE
return g_uacpi_rt_ctx.is_hardware_reduced;
#else
return UACPI_TRUE;
#endif
}

#pragma once
#include <uacpi/types.h>
#include <uacpi/internal/stdlib.h>
#include <uacpi/kernel_api.h>
#define DYNAMIC_ARRAY_WITH_INLINE_STORAGE(name, type, inline_capacity) \
struct name { \
type inline_storage[inline_capacity]; \
type *dynamic_storage; \
uacpi_size dynamic_capacity; \
uacpi_size size_including_inline; \
};
#define DYNAMIC_ARRAY_SIZE(arr) ((arr)->size_including_inline)
#define DYNAMIC_ARRAY_WITH_INLINE_STORAGE_EXPORTS(name, type, prefix) \
prefix uacpi_size name##_inline_capacity(struct name *arr); \
prefix type *name##_at(struct name *arr, uacpi_size idx); \
prefix type *name##_alloc(struct name *arr); \
prefix type *name##_calloc(struct name *arr); \
prefix void name##_pop(struct name *arr); \
prefix uacpi_size name##_size(struct name *arr); \
prefix type *name##_last(struct name *arr); \
prefix void name##_clear(struct name *arr);
#define DYNAMIC_ARRAY_WITH_INLINE_STORAGE_IMPL(name, type, prefix) \
UACPI_MAYBE_UNUSED \
prefix uacpi_size name##_inline_capacity(struct name *arr) \
{ \
return sizeof(arr->inline_storage) / sizeof(arr->inline_storage[0]); \
} \
\
UACPI_MAYBE_UNUSED \
prefix uacpi_size name##_capacity(struct name *arr) \
{ \
return name##_inline_capacity(arr) + arr->dynamic_capacity; \
} \
\
prefix type *name##_at(struct name *arr, uacpi_size idx) \
{ \
if (idx >= arr->size_including_inline) \
return UACPI_NULL; \
\
if (idx < name##_inline_capacity(arr)) \
return &arr->inline_storage[idx]; \
\
return &arr->dynamic_storage[idx - name##_inline_capacity(arr)]; \
} \
\
UACPI_MAYBE_UNUSED \
prefix type *name##_alloc(struct name *arr) \
{ \
uacpi_size inline_cap; \
type *out_ptr; \
\
inline_cap = name##_inline_capacity(arr); \
\
if (arr->size_including_inline >= inline_cap) { \
uacpi_size dynamic_size; \
\
dynamic_size = arr->size_including_inline - inline_cap; \
if (dynamic_size == arr->dynamic_capacity) { \
uacpi_size bytes, type_size; \
void *new_buf; \
\
type_size = sizeof(*arr->dynamic_storage); \
\
if (arr->dynamic_capacity == 0) { \
bytes = type_size * inline_cap; \
} else { \
bytes = (arr->dynamic_capacity / 2) * type_size; \
if (bytes == 0) \
bytes += type_size; \
\
bytes += arr->dynamic_capacity * type_size; \
} \
\
new_buf = uacpi_kernel_alloc(bytes); \
if (uacpi_unlikely(new_buf == UACPI_NULL)) \
return UACPI_NULL; \
\
arr->dynamic_capacity = bytes / type_size; \
\
if (arr->dynamic_storage) { \
uacpi_memcpy(new_buf, arr->dynamic_storage, \
dynamic_size * type_size); \
} \
uacpi_free(arr->dynamic_storage, dynamic_size * type_size); \
arr->dynamic_storage = new_buf; \
} \
\
out_ptr = &arr->dynamic_storage[dynamic_size]; \
goto ret; \
} \
\
\
out_ptr = &arr->inline_storage[arr->size_including_inline]; \
\
ret: \
arr->size_including_inline++; \
return out_ptr; \
} \
\
UACPI_MAYBE_UNUSED \
prefix type *name##_calloc(struct name *arr) \
{ \
type *ret; \
\
ret = name##_alloc(arr); \
if (ret) \
uacpi_memzero(ret, sizeof(*ret)); \
\
return ret; \
} \
\
UACPI_MAYBE_UNUSED \
prefix void name##_pop(struct name *arr) \
{ \
if (arr->size_including_inline == 0) \
return; \
\
arr->size_including_inline--; \
} \
\
UACPI_MAYBE_UNUSED \
prefix uacpi_size name##_size(struct name *arr) \
{ \
return arr->size_including_inline; \
} \
\
UACPI_MAYBE_UNUSED \
prefix type *name##_last(struct name *arr) \
{ \
return name##_at(arr, arr->size_including_inline - 1); \
} \
\
prefix void name##_clear(struct name *arr) \
{ \
uacpi_free( \
arr->dynamic_storage, \
arr->dynamic_capacity * sizeof(*arr->dynamic_storage) \
); \
arr->size_including_inline = 0; \
arr->dynamic_capacity = 0; \
arr->dynamic_storage = UACPI_NULL; \
}

#pragma once
#include <uacpi/event.h>
// This fixed event is internal-only, and we don't expose it in the enum
#define UACPI_FIXED_EVENT_GLOBAL_LOCK 0
UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_initialize_events_early(void)
)
UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_initialize_events(void)
)
UACPI_STUB_IF_REDUCED_HARDWARE(
void uacpi_deinitialize_events(void)
)
UACPI_STUB_IF_REDUCED_HARDWARE(
void uacpi_events_match_post_dynamic_table_load(void)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_clear_all_events(void)
)

#pragma once
#include <uacpi/helpers.h>
#define UACPI_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
#define UACPI_UNUSED(x) (void)(x)

#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#include <uacpi/internal/namespace.h>
enum uacpi_table_load_cause {
UACPI_TABLE_LOAD_CAUSE_LOAD_OP,
UACPI_TABLE_LOAD_CAUSE_LOAD_TABLE_OP,
UACPI_TABLE_LOAD_CAUSE_INIT,
UACPI_TABLE_LOAD_CAUSE_HOST,
};
uacpi_status uacpi_execute_table(void*, enum uacpi_table_load_cause cause);
uacpi_status uacpi_osi(uacpi_handle handle, uacpi_object *retval);
uacpi_status uacpi_execute_control_method(
uacpi_namespace_node *scope, uacpi_control_method *method,
const uacpi_object_array *args, uacpi_object **ret
);

#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/acpi.h>
#include <uacpi/io.h>
uacpi_size uacpi_round_up_bits_to_bytes(uacpi_size bit_length);
void uacpi_read_buffer_field(
const uacpi_buffer_field *field, void *dst
);
void uacpi_write_buffer_field(
uacpi_buffer_field *field, const void *src, uacpi_size size
);
uacpi_status uacpi_read_field_unit(
uacpi_field_unit *field, void *dst, uacpi_size size
);
uacpi_status uacpi_write_field_unit(
uacpi_field_unit *field, const void *src, uacpi_size size
);
uacpi_status uacpi_system_io_read(
uacpi_io_addr address, uacpi_u8 width, uacpi_u64 *out
);
uacpi_status uacpi_system_io_write(
uacpi_io_addr address, uacpi_u8 width, uacpi_u64 in
);
uacpi_status uacpi_system_memory_read(void *ptr, uacpi_u8 width, uacpi_u64 *out);
uacpi_status uacpi_system_memory_write(void *ptr, uacpi_u8 width, uacpi_u64 in);

#pragma once
#include <uacpi/kernel_api.h>
#include <uacpi/internal/context.h>
#ifdef UACPI_FORMATTED_LOGGING
#define uacpi_log uacpi_kernel_log
#else
UACPI_PRINTF_DECL(2, 3)
void uacpi_log(uacpi_log_level, const uacpi_char*, ...);
#endif
#define uacpi_log_lvl(lvl, ...) \
do { if (uacpi_should_log(lvl)) uacpi_log(lvl, __VA_ARGS__); } while (0)
#define uacpi_debug(...) uacpi_log_lvl(UACPI_LOG_DEBUG, __VA_ARGS__)
#define uacpi_trace(...) uacpi_log_lvl(UACPI_LOG_TRACE, __VA_ARGS__)
#define uacpi_info(...) uacpi_log_lvl(UACPI_LOG_INFO, __VA_ARGS__)
#define uacpi_warn(...) uacpi_log_lvl(UACPI_LOG_WARN, __VA_ARGS__)
#define uacpi_error(...) uacpi_log_lvl(UACPI_LOG_ERROR, __VA_ARGS__)
void uacpi_logger_initialize(void);

#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/kernel_api.h>
uacpi_bool uacpi_this_thread_owns_aml_mutex(uacpi_mutex*);
uacpi_status uacpi_acquire_aml_mutex(uacpi_mutex*, uacpi_u16 timeout);
uacpi_status uacpi_release_aml_mutex(uacpi_mutex*);
static inline uacpi_status uacpi_acquire_native_mutex(uacpi_handle mtx)
{
if (uacpi_unlikely(mtx == UACPI_NULL))
return UACPI_STATUS_INVALID_ARGUMENT;
return uacpi_kernel_acquire_mutex(mtx, 0xFFFF);
}
uacpi_status uacpi_acquire_native_mutex_with_timeout(
uacpi_handle mtx, uacpi_u16 timeout
);
static inline uacpi_status uacpi_release_native_mutex(uacpi_handle mtx)
{
if (uacpi_unlikely(mtx == UACPI_NULL))
return UACPI_STATUS_INVALID_ARGUMENT;
uacpi_kernel_release_mutex(mtx);
return UACPI_STATUS_OK;
}
static inline uacpi_status uacpi_acquire_native_mutex_may_be_null(
uacpi_handle mtx
)
{
if (mtx == UACPI_NULL)
return UACPI_STATUS_OK;
return uacpi_kernel_acquire_mutex(mtx, 0xFFFF);
}
static inline uacpi_status uacpi_release_native_mutex_may_be_null(
uacpi_handle mtx
)
{
if (mtx == UACPI_NULL)
return UACPI_STATUS_OK;
uacpi_kernel_release_mutex(mtx);
return UACPI_STATUS_OK;
}
struct uacpi_recursive_lock {
uacpi_handle mutex;
uacpi_size depth;
uacpi_thread_id owner;
};
uacpi_status uacpi_recursive_lock_init(struct uacpi_recursive_lock *lock);
uacpi_status uacpi_recursive_lock_deinit(struct uacpi_recursive_lock *lock);
uacpi_status uacpi_recursive_lock_acquire(struct uacpi_recursive_lock *lock);
uacpi_status uacpi_recursive_lock_release(struct uacpi_recursive_lock *lock);
struct uacpi_rw_lock {
uacpi_handle read_mutex;
uacpi_handle write_mutex;
uacpi_size num_readers;
};
uacpi_status uacpi_rw_lock_init(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_lock_deinit(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_lock_read(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_unlock_read(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_lock_write(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_unlock_write(struct uacpi_rw_lock *lock);

#pragma once
#include <uacpi/types.h>
#include <uacpi/internal/shareable.h>
#include <uacpi/status.h>
#include <uacpi/namespace.h>
#define UACPI_NAMESPACE_NODE_FLAG_ALIAS (1 << 0)
/*
* This node has been uninstalled and has no object associated with it.
*
* This is used to handle edge cases where an object needs to reference
* a namespace node, where the node might end up going out of scope before
* the object lifetime ends.
*/
#define UACPI_NAMESPACE_NODE_FLAG_DANGLING (1u << 1)
/*
* This node is method-local and must not be exposed via public API as its
* lifetime is limited.
*/
#define UACPI_NAMESPACE_NODE_FLAG_TEMPORARY (1u << 2)
#define UACPI_NAMESPACE_NODE_PREDEFINED (1u << 31)
typedef struct uacpi_namespace_node {
struct uacpi_shareable shareable;
uacpi_object_name name;
uacpi_u32 flags;
uacpi_object *object;
struct uacpi_namespace_node *parent;
struct uacpi_namespace_node *child;
struct uacpi_namespace_node *next;
} uacpi_namespace_node;
uacpi_status uacpi_initialize_namespace(void);
void uacpi_deinitialize_namespace(void);
uacpi_namespace_node *uacpi_namespace_node_alloc(uacpi_object_name name);
void uacpi_namespace_node_unref(uacpi_namespace_node *node);
uacpi_status uacpi_namespace_node_type_unlocked(
const uacpi_namespace_node *node, uacpi_object_type *out_type
);
uacpi_status uacpi_namespace_node_is_one_of_unlocked(
const uacpi_namespace_node *node, uacpi_object_type_bits type_mask,
uacpi_bool *out
);
uacpi_object *uacpi_namespace_node_get_object(const uacpi_namespace_node *node);
uacpi_object *uacpi_namespace_node_get_object_typed(
const uacpi_namespace_node *node, uacpi_object_type_bits type_mask
);
uacpi_status uacpi_namespace_node_acquire_object(
const uacpi_namespace_node *node, uacpi_object **out_obj
);
uacpi_status uacpi_namespace_node_acquire_object_typed(
const uacpi_namespace_node *node, uacpi_object_type_bits,
uacpi_object **out_obj
);
uacpi_status uacpi_namespace_node_reacquire_object(
uacpi_object *obj
);
uacpi_status uacpi_namespace_node_release_object(
uacpi_object *obj
);
uacpi_status uacpi_namespace_node_install(
uacpi_namespace_node *parent, uacpi_namespace_node *node
);
uacpi_status uacpi_namespace_node_uninstall(uacpi_namespace_node *node);
uacpi_namespace_node *uacpi_namespace_node_find_sub_node(
uacpi_namespace_node *parent,
uacpi_object_name name
);
enum uacpi_may_search_above_parent {
UACPI_MAY_SEARCH_ABOVE_PARENT_NO,
UACPI_MAY_SEARCH_ABOVE_PARENT_YES,
};
enum uacpi_permanent_only {
UACPI_PERMANENT_ONLY_NO,
UACPI_PERMANENT_ONLY_YES,
};
enum uacpi_should_lock {
UACPI_SHOULD_LOCK_NO,
UACPI_SHOULD_LOCK_YES,
};
uacpi_status uacpi_namespace_node_resolve(
uacpi_namespace_node *scope, const uacpi_char *path, enum uacpi_should_lock,
enum uacpi_may_search_above_parent, enum uacpi_permanent_only,
uacpi_namespace_node **out_node
);
uacpi_status uacpi_namespace_do_for_each_child(
uacpi_namespace_node *parent, uacpi_iteration_callback descending_callback,
uacpi_iteration_callback ascending_callback,
uacpi_object_type_bits, uacpi_u32 max_depth, enum uacpi_should_lock,
enum uacpi_permanent_only, void *user
);
uacpi_bool uacpi_namespace_node_is_dangling(uacpi_namespace_node *node);
uacpi_bool uacpi_namespace_node_is_temporary(uacpi_namespace_node *node);
uacpi_bool uacpi_namespace_node_is_predefined(uacpi_namespace_node *node);
uacpi_status uacpi_namespace_read_lock(void);
uacpi_status uacpi_namespace_read_unlock(void);
uacpi_status uacpi_namespace_write_lock(void);
uacpi_status uacpi_namespace_write_unlock(void);

#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/notify.h>
uacpi_status uacpi_initialize_notify(void);
void uacpi_deinitialize_notify(void);
uacpi_status uacpi_notify_all(uacpi_namespace_node *node, uacpi_u64 value);

#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/opregion.h>
uacpi_status uacpi_initialize_opregion(void);
void uacpi_deinitialize_opregion(void);
void uacpi_trace_region_error(
uacpi_namespace_node *node, uacpi_char *message, uacpi_status ret
);
void uacpi_trace_region_io(
uacpi_namespace_node *node, uacpi_address_space space, uacpi_region_op op,
uacpi_u64 offset, uacpi_u8 byte_size, uacpi_u64 ret
);
uacpi_status uacpi_install_address_space_handler_with_flags(
uacpi_namespace_node *device_node, enum uacpi_address_space space,
uacpi_region_handler handler, uacpi_handle handler_context,
uacpi_u16 flags
);
void uacpi_opregion_uninstall_handler(uacpi_namespace_node *node);
uacpi_bool uacpi_address_space_handler_is_default(
uacpi_address_space_handler *handler
);
uacpi_address_space_handlers *uacpi_node_get_address_space_handlers(
uacpi_namespace_node *node
);
uacpi_status uacpi_initialize_opregion_node(uacpi_namespace_node *node);
uacpi_status uacpi_opregion_attach(uacpi_namespace_node *node);
void uacpi_install_default_address_space_handlers(void);
uacpi_status uacpi_dispatch_opregion_io(
uacpi_namespace_node *region_node, uacpi_u32 offset, uacpi_u8 byte_width,
uacpi_region_op op, uacpi_u64 *in_out
);

#pragma once
#include <uacpi/osi.h>
uacpi_status uacpi_initialize_interfaces(void);
void uacpi_deinitialize_interfaces(void);
uacpi_status uacpi_handle_osi(const uacpi_char *string, uacpi_bool *out_value);

#pragma once
#include <uacpi/types.h>
uacpi_status uacpi_ininitialize_registers(void);
void uacpi_deininitialize_registers(void);
enum uacpi_register {
UACPI_REGISTER_PM1_STS = 0,
UACPI_REGISTER_PM1_EN,
UACPI_REGISTER_PM1_CNT,
UACPI_REGISTER_PM_TMR,
UACPI_REGISTER_PM2_CNT,
UACPI_REGISTER_SLP_CNT,
UACPI_REGISTER_SLP_STS,
UACPI_REGISTER_RESET,
UACPI_REGISTER_SMI_CMD,
UACPI_REGISTER_MAX = UACPI_REGISTER_SMI_CMD,
};
uacpi_status uacpi_read_register(enum uacpi_register, uacpi_u64*);
uacpi_status uacpi_write_register(enum uacpi_register, uacpi_u64);
uacpi_status uacpi_write_registers(enum uacpi_register, uacpi_u64, uacpi_u64);
enum uacpi_register_field {
UACPI_REGISTER_FIELD_TMR_STS = 0,
UACPI_REGISTER_FIELD_BM_STS,
UACPI_REGISTER_FIELD_GBL_STS,
UACPI_REGISTER_FIELD_PWRBTN_STS,
UACPI_REGISTER_FIELD_SLPBTN_STS,
UACPI_REGISTER_FIELD_RTC_STS,
UACPI_REGISTER_FIELD_PCIEX_WAKE_STS,
UACPI_REGISTER_FIELD_HWR_WAK_STS,
UACPI_REGISTER_FIELD_WAK_STS,
UACPI_REGISTER_FIELD_TMR_EN,
UACPI_REGISTER_FIELD_GBL_EN,
UACPI_REGISTER_FIELD_PWRBTN_EN,
UACPI_REGISTER_FIELD_SLPBTN_EN,
UACPI_REGISTER_FIELD_RTC_EN,
UACPI_REGISTER_FIELD_PCIEXP_WAKE_DIS,
UACPI_REGISTER_FIELD_SCI_EN,
UACPI_REGISTER_FIELD_BM_RLD,
UACPI_REGISTER_FIELD_GBL_RLS,
UACPI_REGISTER_FIELD_SLP_TYP,
UACPI_REGISTER_FIELD_HWR_SLP_TYP,
UACPI_REGISTER_FIELD_SLP_EN,
UACPI_REGISTER_FIELD_HWR_SLP_EN,
UACPI_REGISTER_FIELD_ARB_DIS,
UACPI_REGISTER_FIELD_MAX = UACPI_REGISTER_FIELD_ARB_DIS,
};
uacpi_status uacpi_read_register_field(enum uacpi_register_field, uacpi_u64*);
uacpi_status uacpi_write_register_field(enum uacpi_register_field, uacpi_u64);

#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/resources.h>
enum uacpi_aml_resource {
UACPI_AML_RESOURCE_TYPE_INVALID = 0,
// Small resources
UACPI_AML_RESOURCE_IRQ,
UACPI_AML_RESOURCE_DMA,
UACPI_AML_RESOURCE_START_DEPENDENT,
UACPI_AML_RESOURCE_END_DEPENDENT,
UACPI_AML_RESOURCE_IO,
UACPI_AML_RESOURCE_FIXED_IO,
UACPI_AML_RESOURCE_FIXED_DMA,
UACPI_AML_RESOURCE_VENDOR_TYPE0,
UACPI_AML_RESOURCE_END_TAG,
// Large resources
UACPI_AML_RESOURCE_MEMORY24,
UACPI_AML_RESOURCE_GENERIC_REGISTER,
UACPI_AML_RESOURCE_VENDOR_TYPE1,
UACPI_AML_RESOURCE_MEMORY32,
UACPI_AML_RESOURCE_FIXED_MEMORY32,
UACPI_AML_RESOURCE_ADDRESS32,
UACPI_AML_RESOURCE_ADDRESS16,
UACPI_AML_RESOURCE_EXTENDED_IRQ,
UACPI_AML_RESOURCE_ADDRESS64,
UACPI_AML_RESOURCE_ADDRESS64_EXTENDED,
UACPI_AML_RESOURCE_GPIO_CONNECTION,
UACPI_AML_RESOURCE_PIN_FUNCTION,
UACPI_AML_RESOURCE_SERIAL_CONNECTION,
UACPI_AML_RESOURCE_PIN_CONFIGURATION,
UACPI_AML_RESOURCE_PIN_GROUP,
UACPI_AML_RESOURCE_PIN_GROUP_FUNCTION,
UACPI_AML_RESOURCE_PIN_GROUP_CONFIGURATION,
UACPI_AML_RESOURCE_CLOCK_INPUT,
UACPI_AML_RESOURCE_MAX = UACPI_AML_RESOURCE_CLOCK_INPUT,
};
enum uacpi_aml_resource_size_kind {
UACPI_AML_RESOURCE_SIZE_KIND_FIXED,
UACPI_AML_RESOURCE_SIZE_KIND_FIXED_OR_ONE_LESS,
UACPI_AML_RESOURCE_SIZE_KIND_VARIABLE,
};
enum uacpi_aml_resource_kind {
UACPI_AML_RESOURCE_KIND_SMALL = 0,
UACPI_AML_RESOURCE_KIND_LARGE,
};
enum uacpi_resource_convert_opcode {
UACPI_RESOURCE_CONVERT_OPCODE_END = 0,
/*
* AML -> native:
* Take the mask at 'aml_offset' and convert to an array of uacpi_u8
* at 'native_offset' with the value corresponding to the bit index.
* The array size is written to the byte at offset 'arg2'.
*
* native -> AML:
* Walk each element of the array at 'native_offset' and set the
* corresponding bit in the mask at 'aml_offset' to 1. The array size is
* read from the byte at offset 'arg2'.
*/
UACPI_RESOURCE_CONVERT_OPCODE_PACKED_ARRAY_8,
UACPI_RESOURCE_CONVERT_OPCODE_PACKED_ARRAY_16,
/*
* AML -> native:
* Grab the bits at the byte at 'aml_offset' + 'bit_index', and copy its
* value into the byte at 'native_offset'.
*
* native -> AML:
* Grab first N bits at 'native_offset' and copy to 'aml_offset' starting
* at the 'bit_index'.
*
* NOTE:
* These must be contiguous in this order.
*/
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_1,
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_2,
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_3,
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_6 =
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_3 + 3,
/*
* AML -> native:
* Copy N bytes at 'aml_offset' to 'native_offset'.
*
* native -> AML:
* Copy N bytes at 'native_offset' to 'aml_offset'.
*
* 'imm' is added to the accumulator.
*
* NOTE: These are affected by the current value in the accumulator. If it's
* set to 0 at the time of evaluation, this is executed once, N times
* otherwise. 0xFF is considered a special value, which resets the
* accumulator to 0 unconditionally.
*/
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_8,
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_16,
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_32,
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_64,
/*
* If the length of the current resource is less than 'arg0', then skip
* 'imm' instructions.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SKIP_IF_AML_SIZE_LESS_THAN,
/*
* Skip 'imm' instructions if 'arg0' is not equal to the value in the
* accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SKIP_IF_NOT_EQUALS,
/*
* AML -> native:
* Set the byte at 'native_offset' to 'imm'.
*
* native -> AML:
* Set the byte at 'aml_offset' to 'imm'.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SET_TO_IMM,
/*
* AML -> native:
* Load the AML resource length into the accumulator as well as the field at
* 'native_offset' of width N.
*
* native -> AML:
* Load the resource length into the accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_AML_SIZE_32,
/*
* AML -> native:
* Load the 8 bit field at 'aml_offset' into the accumulator and store at
* 'native_offset'.
*
* native -> AML:
* Load the 8 bit field at 'native_offset' into the accumulator and store
* at 'aml_offset'.
*
* The accumulator is multiplied by 'imm' unless it's set to zero.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_8_STORE,
/*
* Load the N bit field at 'native_offset' into the accumulator
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_8_NATIVE,
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_16_NATIVE,
/*
* Load 'imm' into the accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_IMM,
/*
* AML -> native:
* Load the resource source at offset = aml size + accumulator into the
* uacpi_resource_source struct at 'native_offset'. The string bytes are
* written to the offset at resource size + accumulator. The presence is
* detected by comparing the length of the resource to the offset,
* 'arg2' optionally specifies the offset to the upper bound of the string.
*
* native -> AML:
* Load the resource source from the uacpi_resource_source struct at
* 'native_offset' to aml_size + accumulator. aml_size + accumulator is
* optionally written to 'aml_offset' if it's specified.
*/
UACPI_RESOURCE_CONVERT_OPCODE_RESOURCE_SOURCE,
UACPI_RESOURCE_CONVERT_OPCODE_RESOURCE_SOURCE_NO_INDEX,
UACPI_RESOURCE_CONVERT_OPCODE_RESOURCE_LABEL,
/*
* AML -> native:
* Load the pin table with upper bound specified at 'aml_offset'.
* The table length is calculated by subtracting the upper bound from
* aml_size and is written into the accumulator.
*
* native -> AML:
* Load the pin table length from 'native_offset' and multiply by 2, store
* the result in the accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_PIN_TABLE_LENGTH,
/*
* AML -> native:
* Store the accumulator divided by 2 at 'native_offset'.
* The table is copied to the offset at resource size from offset at
* aml_size with the pointer written to the offset at 'arg2'.
*
* native -> AML:
* Read the pin table from resource size offset, write aml_size to
* 'aml_offset'. Copy accumulator bytes to the offset at aml_size.
*/
UACPI_RESOURCE_CONVERT_OPCODE_PIN_TABLE,
/*
* AML -> native:
* Load vendor data with offset stored at 'aml_offset'. The length is
* calculated as aml_size - aml_offset and is written to 'native_offset'.
* The data is written to offset - aml_size with the pointer written back
* to the offset at 'arg2'.
*
* native -> AML:
* Read vendor data from the pointer at offset 'arg2' and size at
* 'native_offset', the offset to write to is calculated as the difference
* between the data pointer and the native resource end pointer.
* offset + aml_size is written to 'aml_offset' and the data is copied
* there as well.
*/
UACPI_RESOURCE_CONVERT_OPCODE_VENDOR_DATA,
/*
* AML -> native:
* Read the serial type from the byte at 'aml_offset' and write it to the
* type field of the uacpi_resource_serial_bus_common structure. Convert
* the serial type to native and set the resource type to it. Copy the
* vendor data to the offset at native size, the length is calculated
* as type_data_length - extra-type-specific-size, and is written to
* vendor_data_length, as well as the accumulator. The data pointer is
* written to vendor_data.
*
* native -> AML:
* Set the serial type at 'aml_offset' to the value stored at
* 'native_offset'. Load the vendor data to the offset at aml_size,
* the length is read from 'vendor_data_length', and the data is copied from
* 'vendor_data'.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SERIAL_TYPE_SPECIFIC,
/*
* Produces an error if encountered in the instruction stream.
* Used to trap invalid/unexpected code flow.
*/
UACPI_RESOURCE_CONVERT_OPCODE_UNREACHABLE,
};
struct uacpi_resource_convert_instruction {
uacpi_u8 code;
union {
uacpi_u8 aml_offset;
uacpi_u8 arg0;
};
union {
uacpi_u8 native_offset;
uacpi_u8 arg1;
};
union {
uacpi_u8 imm;
uacpi_u8 bit_index;
uacpi_u8 arg2;
};
};
struct uacpi_resource_spec {
uacpi_u8 type : 5;
uacpi_u8 native_type : 5;
uacpi_u8 resource_kind : 1;
uacpi_u8 size_kind : 2;
/*
* Size of the resource as appears in the AML byte stream, for variable
* length resources this is the minimum.
*/
uacpi_u16 aml_size;
/*
* Size of the native human-readable uacpi resource, for variable length
* resources this is the minimum. The final length is this field plus the
* result of extra_size_for_native().
*/
uacpi_u16 native_size;
/*
* Calculates the number of extra bytes that must be allocated for a specific
* native resource given the AML counterpart. If this is NULL, no extra bytes
* are needed, i.e. the native resource is always the same size.
*/
uacpi_size (*extra_size_for_native)(
const struct uacpi_resource_spec*, void*, uacpi_size
);
/*
* Calculate the number of bytes needed to represent a native resource as
* AML. The 'aml_size' field is used if this is NULL.
*/
uacpi_size (*size_for_aml)(
const struct uacpi_resource_spec*, uacpi_resource*
);
const struct uacpi_resource_convert_instruction *to_native;
const struct uacpi_resource_convert_instruction *to_aml;
};
typedef uacpi_iteration_decision (*uacpi_aml_resource_iteration_callback)(
void*, uacpi_u8 *data, uacpi_u16 resource_size,
const struct uacpi_resource_spec*
);
uacpi_status uacpi_for_each_aml_resource(
uacpi_buffer *buffer, uacpi_aml_resource_iteration_callback cb, void *user
);
uacpi_status uacpi_find_aml_resource_end_tag(
uacpi_buffer *buffer, uacpi_size *out_offset
);
uacpi_status uacpi_native_resources_from_aml(
uacpi_buffer *aml_buffer, uacpi_resources **out_resources
);
uacpi_status uacpi_native_resources_to_aml(
uacpi_resources *resources, uacpi_object **out_template
);

#pragma once
#include <uacpi/types.h>
struct uacpi_shareable {
uacpi_u32 reference_count;
};
void uacpi_shareable_init(uacpi_handle);
uacpi_bool uacpi_bugged_shareable(uacpi_handle);
void uacpi_make_shareable_bugged(uacpi_handle);
uacpi_u32 uacpi_shareable_ref(uacpi_handle);
uacpi_u32 uacpi_shareable_unref(uacpi_handle);
void uacpi_shareable_unref_and_delete_if_last(
uacpi_handle, void (*do_free)(uacpi_handle)
);
uacpi_u32 uacpi_shareable_refcount(uacpi_handle);

#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/internal/helpers.h>
#include <uacpi/platform/libc.h>
#include <uacpi/kernel_api.h>
#ifndef uacpi_memcpy
void *uacpi_memcpy(void *dest, const void *src, uacpi_size count);
#endif
#ifndef uacpi_memmove
void *uacpi_memmove(void *dest, const void *src, uacpi_size count);
#endif
#ifndef uacpi_memset
void *uacpi_memset(void *dest, uacpi_i32 ch, uacpi_size count);
#endif
#ifndef uacpi_memcmp
uacpi_i32 uacpi_memcmp(const void *lhs, const void *rhs, uacpi_size count);
#endif
#ifndef uacpi_strlen
uacpi_size uacpi_strlen(const uacpi_char *str);
#endif
#ifndef uacpi_strnlen
uacpi_size uacpi_strnlen(const uacpi_char *str, uacpi_size max);
#endif
#ifndef uacpi_strcmp
uacpi_i32 uacpi_strcmp(const uacpi_char *lhs, const uacpi_char *rhs);
#endif
#ifndef uacpi_snprintf
UACPI_PRINTF_DECL(3, 4)
uacpi_i32 uacpi_snprintf(
uacpi_char *buffer, uacpi_size capacity, const uacpi_char *fmt, ...
);
#endif
#ifndef uacpi_vsnprintf
uacpi_i32 uacpi_vsnprintf(
uacpi_char *buffer, uacpi_size capacity, const uacpi_char *fmt,
uacpi_va_list vlist
);
#endif
#ifdef UACPI_SIZED_FREES
#define uacpi_free(mem, size) uacpi_kernel_free(mem, size)
#else
#define uacpi_free(mem, _) uacpi_kernel_free(mem)
#endif
#define uacpi_memzero(ptr, size) uacpi_memset(ptr, 0, size)
#define UACPI_COMPARE(x, y, op) ((x) op (y) ? (x) : (y))
#define UACPI_MIN(x, y) UACPI_COMPARE(x, y, <)
#define UACPI_MAX(x, y) UACPI_COMPARE(x, y, >)
#define UACPI_ALIGN_UP_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define UACPI_ALIGN_UP(x, val, type) UACPI_ALIGN_UP_MASK(x, (type)(val) - 1)
#define UACPI_ALIGN_DOWN_MASK(x, mask) ((x) & ~(mask))
#define UACPI_ALIGN_DOWN(x, val, type) UACPI_ALIGN_DOWN_MASK(x, (type)(val) - 1)
#define UACPI_IS_ALIGNED_MASK(x, mask) (((x) & (mask)) == 0)
#define UACPI_IS_ALIGNED(x, val, type) UACPI_IS_ALIGNED_MASK(x, (type)(val) - 1)
#define UACPI_IS_POWER_OF_TWO(x, type) UACPI_IS_ALIGNED(x, x, type)
void uacpi_memcpy_zerout(void *dst, const void *src,
uacpi_size dst_size, uacpi_size src_size);
// Returns the one-based bit location of LSb or 0
uacpi_u8 uacpi_bit_scan_forward(uacpi_u64);
// Returns the one-based bit location of MSb or 0
uacpi_u8 uacpi_bit_scan_backward(uacpi_u64);
uacpi_u8 uacpi_popcount(uacpi_u64);
#ifndef UACPI_NATIVE_ALLOC_ZEROED
void *uacpi_builtin_alloc_zeroed(uacpi_size size);
#define uacpi_kernel_alloc_zeroed uacpi_builtin_alloc_zeroed
#endif

#pragma once
#include <uacpi/internal/context.h>
#include <uacpi/internal/interpreter.h>
#include <uacpi/types.h>
#include <uacpi/status.h>
#include <uacpi/tables.h>
enum uacpi_table_origin {
UACPI_TABLE_ORIGIN_FIRMWARE_VIRTUAL = 0,
UACPI_TABLE_ORIGIN_FIRMWARE_PHYSICAL,
UACPI_TABLE_ORIGIN_HOST_VIRTUAL,
UACPI_TABLE_ORIGIN_HOST_PHYSICAL,
};
struct uacpi_installed_table {
uacpi_phys_addr phys_addr;
struct acpi_sdt_hdr hdr;
void *ptr;
uacpi_u16 reference_count;
#define UACPI_TABLE_LOADED (1 << 0)
#define UACPI_TABLE_CSUM_VERIFIED (1 << 1)
#define UACPI_TABLE_INVALID (1 << 2)
uacpi_u8 flags;
uacpi_u8 origin;
};
uacpi_status uacpi_initialize_tables(void);
void uacpi_deinitialize_tables(void);
uacpi_bool uacpi_signatures_match(const void *const lhs, const void *const rhs);
uacpi_status uacpi_check_table_signature(void *table, const uacpi_char *expect);
uacpi_status uacpi_verify_table_checksum(void *table, uacpi_size size);
uacpi_status uacpi_table_install_physical_with_origin(
uacpi_phys_addr phys, enum uacpi_table_origin origin, uacpi_table *out_table
);
uacpi_status uacpi_table_install_with_origin(
void *virt, enum uacpi_table_origin origin, uacpi_table *out_table
);
void uacpi_table_mark_as_loaded(uacpi_size idx);
uacpi_status uacpi_table_load_with_cause(
uacpi_size idx, enum uacpi_table_load_cause cause
);
typedef uacpi_iteration_decision (*uacpi_table_iteration_callback)
(void *user, struct uacpi_installed_table *tbl, uacpi_size idx);
uacpi_status uacpi_for_each_table(
uacpi_size base_idx, uacpi_table_iteration_callback, void *user
);
typedef uacpi_bool (*uacpi_table_match_callback)
(struct uacpi_installed_table *tbl);
uacpi_status uacpi_table_match(
uacpi_size base_idx, uacpi_table_match_callback, uacpi_table *out_table
);
#define UACPI_PRI_TBL_HDR "'%.4s' (OEM ID '%.6s' OEM Table ID '%.8s')"
#define UACPI_FMT_TBL_HDR(hdr) (hdr)->signature, (hdr)->oemid, (hdr)->oem_table_id

#pragma once
#include <uacpi/status.h>
#include <uacpi/types.h>
#include <uacpi/internal/shareable.h>
// object->flags field if object->type == UACPI_OBJECT_REFERENCE
enum uacpi_reference_kind {
UACPI_REFERENCE_KIND_REFOF = 0,
UACPI_REFERENCE_KIND_LOCAL = 1,
UACPI_REFERENCE_KIND_ARG = 2,
UACPI_REFERENCE_KIND_NAMED = 3,
UACPI_REFERENCE_KIND_PKG_INDEX = 4,
};
// object->flags field if object->type == UACPI_OBJECT_STRING
enum uacpi_string_kind {
UACPI_STRING_KIND_NORMAL = 0,
UACPI_STRING_KIND_PATH,
};
typedef struct uacpi_buffer {
struct uacpi_shareable shareable;
union {
void *data;
uacpi_u8 *byte_data;
uacpi_char *text;
};
uacpi_size size;
} uacpi_buffer;
typedef struct uacpi_package {
struct uacpi_shareable shareable;
uacpi_object **objects;
uacpi_size count;
} uacpi_package;
typedef struct uacpi_buffer_field {
uacpi_buffer *backing;
uacpi_size bit_index;
uacpi_u32 bit_length;
uacpi_bool force_buffer;
} uacpi_buffer_field;
typedef struct uacpi_buffer_index {
uacpi_size idx;
uacpi_buffer *buffer;
} uacpi_buffer_index;
typedef struct uacpi_mutex {
struct uacpi_shareable shareable;
uacpi_handle handle;
uacpi_thread_id owner;
uacpi_u16 depth;
uacpi_u8 sync_level;
} uacpi_mutex;
typedef struct uacpi_event {
struct uacpi_shareable shareable;
uacpi_handle handle;
} uacpi_event;
typedef struct uacpi_address_space_handler {
struct uacpi_shareable shareable;
uacpi_region_handler callback;
uacpi_handle user_context;
struct uacpi_address_space_handler *next;
struct uacpi_operation_region *regions;
uacpi_u16 space;
#define UACPI_ADDRESS_SPACE_HANDLER_DEFAULT (1 << 0)
uacpi_u16 flags;
} uacpi_address_space_handler;
/*
* NOTE: These are common object headers.
* Any changes to these structs must be propagated to all objects.
* ==============================================================
* Common for the following objects:
* - UACPI_OBJECT_OPERATION_REGION
* - UACPI_OBJECT_PROCESSOR
* - UACPI_OBJECT_DEVICE
* - UACPI_OBJECT_THERMAL_ZONE
*/
typedef struct uacpi_address_space_handlers {
struct uacpi_shareable shareable;
uacpi_address_space_handler *head;
} uacpi_address_space_handlers;
typedef struct uacpi_device_notify_handler {
uacpi_notify_handler callback;
uacpi_handle user_context;
struct uacpi_device_notify_handler *next;
} uacpi_device_notify_handler;
/*
* Common for the following objects:
* - UACPI_OBJECT_PROCESSOR
* - UACPI_OBJECT_DEVICE
* - UACPI_OBJECT_THERMAL_ZONE
*/
typedef struct uacpi_handlers {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_head;
uacpi_device_notify_handler *notify_head;
} uacpi_handlers;
// This region has a corresponding _REG method that was successfully executed
#define UACPI_OP_REGION_STATE_REG_EXECUTED (1 << 0)
// This region was successfully attached to a handler
#define UACPI_OP_REGION_STATE_ATTACHED (1 << 1)
typedef struct uacpi_operation_region {
struct uacpi_shareable shareable;
uacpi_address_space_handler *handler;
uacpi_handle user_context;
uacpi_u16 space;
uacpi_u8 state_flags;
uacpi_u64 offset;
uacpi_u64 length;
// If space == TABLE_DATA
uacpi_u64 table_idx;
// Used to link regions sharing the same handler
struct uacpi_operation_region *next;
} uacpi_operation_region;
typedef struct uacpi_device {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_handlers;
uacpi_device_notify_handler *notify_handlers;
} uacpi_device;
typedef struct uacpi_processor {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_handlers;
uacpi_device_notify_handler *notify_handlers;
uacpi_u8 id;
uacpi_u32 block_address;
uacpi_u8 block_length;
} uacpi_processor;
typedef struct uacpi_thermal_zone {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_handlers;
uacpi_device_notify_handler *notify_handlers;
} uacpi_thermal_zone;
typedef struct uacpi_power_resource {
uacpi_u8 system_level;
uacpi_u16 resource_order;
} uacpi_power_resource;
typedef uacpi_status (*uacpi_native_call_handler)(
uacpi_handle ctx, uacpi_object *retval
);
typedef struct uacpi_control_method {
struct uacpi_shareable shareable;
union {
uacpi_u8 *code;
uacpi_native_call_handler handler;
};
uacpi_mutex *mutex;
uacpi_u32 size;
uacpi_u8 sync_level : 4;
uacpi_u8 args : 3;
uacpi_u8 is_serialized : 1;
uacpi_u8 named_objects_persist: 1;
uacpi_u8 native_call : 1;
uacpi_u8 owns_code : 1;
} uacpi_control_method;
typedef enum uacpi_access_type {
UACPI_ACCESS_TYPE_ANY = 0,
UACPI_ACCESS_TYPE_BYTE = 1,
UACPI_ACCESS_TYPE_WORD = 2,
UACPI_ACCESS_TYPE_DWORD = 3,
UACPI_ACCESS_TYPE_QWORD = 4,
UACPI_ACCESS_TYPE_BUFFER = 5,
} uacpi_access_type;
typedef enum uacpi_access_attributes {
UACPI_ACCESS_ATTRIBUTE_QUICK = 0x02,
UACPI_ACCESS_ATTRIBUTE_SEND_RECEIVE = 0x04,
UACPI_ACCESS_ATTRIBUTE_BYTE = 0x06,
UACPI_ACCESS_ATTRIBUTE_WORD = 0x08,
UACPI_ACCESS_ATTRIBUTE_BLOCK = 0x0A,
UACPI_ACCESS_ATTRIBUTE_BYTES = 0x0B,
UACPI_ACCESS_ATTRIBUTE_PROCESS_CALL = 0x0C,
UACPI_ACCESS_ATTRIBUTE_BLOCK_PROCESS_CALL = 0x0D,
UACPI_ACCESS_ATTRIBUTE_RAW_BYTES = 0x0E,
UACPI_ACCESS_ATTRIBUTE_RAW_PROCESS_BYTES = 0x0F,
} uacpi_access_attributes;
typedef enum uacpi_lock_rule {
UACPI_LOCK_RULE_NO_LOCK = 0,
UACPI_LOCK_RULE_LOCK = 1,
} uacpi_lock_rule;
typedef enum uacpi_update_rule {
UACPI_UPDATE_RULE_PRESERVE = 0,
UACPI_UPDATE_RULE_WRITE_AS_ONES = 1,
UACPI_UPDATE_RULE_WRITE_AS_ZEROES = 2,
} uacpi_update_rule;
typedef enum uacpi_field_unit_kind {
UACPI_FIELD_UNIT_KIND_NORMAL = 0,
UACPI_FIELD_UNIT_KIND_INDEX = 1,
UACPI_FIELD_UNIT_KIND_BANK = 2,
} uacpi_field_unit_kind;
typedef struct uacpi_field_unit {
struct uacpi_shareable shareable;
union {
// UACPI_FIELD_UNIT_KIND_NORMAL
struct {
uacpi_namespace_node *region;
};
// UACPI_FIELD_UNIT_KIND_INDEX
struct {
struct uacpi_field_unit *index;
struct uacpi_field_unit *data;
};
// UACPI_FIELD_UNIT_KIND_BANK
struct {
uacpi_namespace_node *bank_region;
struct uacpi_field_unit *bank_selection;
uacpi_u64 bank_value;
};
};
uacpi_object *connection;
uacpi_u32 byte_offset;
uacpi_u32 bit_length;
uacpi_u8 bit_offset_within_first_byte;
uacpi_u8 access_width_bytes;
uacpi_u8 access_length;
uacpi_u8 attributes : 4;
uacpi_u8 update_rule : 2;
uacpi_u8 kind : 2;
uacpi_u8 lock_rule : 1;
} uacpi_field_unit;
typedef struct uacpi_object {
struct uacpi_shareable shareable;
uacpi_u8 type;
uacpi_u8 flags;
union {
uacpi_u64 integer;
uacpi_package *package;
uacpi_buffer_field buffer_field;
uacpi_object *inner_object;
uacpi_control_method *method;
uacpi_buffer *buffer;
uacpi_mutex *mutex;
uacpi_event *event;
uacpi_buffer_index buffer_index;
uacpi_operation_region *op_region;
uacpi_device *device;
uacpi_processor *processor;
uacpi_thermal_zone *thermal_zone;
uacpi_address_space_handlers *address_space_handlers;
uacpi_handlers *handlers;
uacpi_power_resource power_resource;
uacpi_field_unit *field_unit;
};
} uacpi_object;
uacpi_object *uacpi_create_object(uacpi_object_type type);
enum uacpi_assign_behavior {
UACPI_ASSIGN_BEHAVIOR_DEEP_COPY,
UACPI_ASSIGN_BEHAVIOR_SHALLOW_COPY,
};
uacpi_status uacpi_object_assign(uacpi_object *dst, uacpi_object *src,
enum uacpi_assign_behavior);
void uacpi_object_attach_child(uacpi_object *parent, uacpi_object *child);
void uacpi_object_detach_child(uacpi_object *parent);
struct uacpi_object *uacpi_create_internal_reference(
enum uacpi_reference_kind kind, uacpi_object *child
);
uacpi_object *uacpi_unwrap_internal_reference(uacpi_object *object);
enum uacpi_prealloc_objects {
UACPI_PREALLOC_OBJECTS_NO,
UACPI_PREALLOC_OBJECTS_YES,
};
uacpi_bool uacpi_package_fill(
uacpi_package *pkg, uacpi_size num_elements,
enum uacpi_prealloc_objects prealloc_objects
);
uacpi_mutex *uacpi_create_mutex(void);
void uacpi_mutex_unref(uacpi_mutex*);
void uacpi_method_unref(uacpi_control_method*);
void uacpi_address_space_handler_unref(uacpi_address_space_handler *handler);

#pragma once
#include <uacpi/types.h>
#include <uacpi/utilities.h>
#include <uacpi/internal/log.h>
#include <uacpi/internal/stdlib.h>
static inline uacpi_phys_addr uacpi_truncate_phys_addr_with_warn(uacpi_u64 large_addr)
{
if (sizeof(uacpi_phys_addr) < 8 && large_addr > 0xFFFFFFFF) {
uacpi_warn(
"truncating a physical address 0x%"UACPI_PRIX64
" outside of address space\n", UACPI_FMT64(large_addr)
);
}
return (uacpi_phys_addr)large_addr;
}
#define UACPI_PTR_TO_VIRT_ADDR(ptr) ((uacpi_virt_addr)(ptr))
#define UACPI_VIRT_ADDR_TO_PTR(vaddr) ((void*)(vaddr))
#define UACPI_PTR_ADD(ptr, value) ((void*)(((uacpi_u8*)(ptr)) + value))
/*
* Target buffer must have a length of at least 8 bytes.
*/
void uacpi_eisa_id_to_string(uacpi_u32, uacpi_char *out_string);
enum uacpi_base {
UACPI_BASE_AUTO,
UACPI_BASE_OCT = 8,
UACPI_BASE_DEC = 10,
UACPI_BASE_HEX = 16,
};
uacpi_status uacpi_string_to_integer(
const uacpi_char *str, uacpi_size max_chars, enum uacpi_base base,
uacpi_u64 *out_value
);
uacpi_bool uacpi_is_valid_nameseg(uacpi_u8 *nameseg);
void uacpi_free_dynamic_string(const uacpi_char *str);
#define UACPI_NANOSECONDS_PER_SEC (1000ull * 1000ull * 1000ull)

src/include/uacpi/io.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/acpi.h>
#ifdef __cplusplus
extern "C" {
#endif
uacpi_status uacpi_gas_read(const struct acpi_gas *gas, uacpi_u64 *value);
uacpi_status uacpi_gas_write(const struct acpi_gas *gas, uacpi_u64 value);
#ifdef __cplusplus
}
#endif

#pragma once
#include <uacpi/types.h>
#include <uacpi/platform/arch_helpers.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Convenience initialization/deinitialization hooks that will be called by
* uACPI automatically when appropriate if compiled-in.
*/
#ifdef UACPI_KERNEL_INITIALIZATION
/*
* This API is invoked for each initialization level so that appropriate parts
* of the host kernel and/or glue code can be initialized at different stages.
*
* uACPI API that triggers calls to uacpi_kernel_initialize and the respective
* 'current_init_lvl' passed to the hook at that stage:
* 1. uacpi_initialize() -> UACPI_INIT_LEVEL_EARLY
* 2. uacpi_namespace_load() -> UACPI_INIT_LEVEL_SUBSYSTEM_INITIALIZED
* 3. (start of) uacpi_namespace_initialize() -> UACPI_INIT_LEVEL_NAMESPACE_LOADED
* 4. (end of) uacpi_namespace_initialize() -> UACPI_INIT_LEVEL_NAMESPACE_INITIALIZED
*/
uacpi_status uacpi_kernel_initialize(uacpi_init_level current_init_lvl);
void uacpi_kernel_deinitialize(void);
#endif
// Returns the PHYSICAL address of the RSDP structure via *out_rsdp_address.
uacpi_status uacpi_kernel_get_rsdp(uacpi_phys_addr *out_rsdp_address);
/*
* Open a PCI device at 'address' for reading & writing.
*
* The handle returned via 'out_handle' is used to perform IO on the
* configuration space of the device.
*/
uacpi_status uacpi_kernel_pci_device_open(
uacpi_pci_address address, uacpi_handle *out_handle
);
void uacpi_kernel_pci_device_close(uacpi_handle);
/*
* Read & write the configuration space of a previously open PCI device.
*
* NOTE:
* 'byte_width' is ALWAYS one of 1, 2, 4. Since PCI registers are 32 bits wide,
* this must be able to handle e.g. a 1-byte access by reading at the nearest
* 4-byte aligned offset below, then masking the value to select the target
* byte.
*/
uacpi_status uacpi_kernel_pci_read(
uacpi_handle device, uacpi_size offset,
uacpi_u8 byte_width, uacpi_u64 *value
);
uacpi_status uacpi_kernel_pci_write(
uacpi_handle device, uacpi_size offset,
uacpi_u8 byte_width, uacpi_u64 value
);
/*
* Map a SystemIO address at [base, base + len) and return a kernel-implemented
* handle that can be used for reading and writing the IO range.
*/
uacpi_status uacpi_kernel_io_map(
uacpi_io_addr base, uacpi_size len, uacpi_handle *out_handle
);
void uacpi_kernel_io_unmap(uacpi_handle handle);
/*
* Read/Write the IO range mapped via uacpi_kernel_io_map
* at a 0-based 'offset' within the range.
*
* NOTE:
* 'byte_width' is ALWAYS one of 1, 2, 4. You are NOT allowed to break e.g. a
* 4-byte access into four 1-byte accesses. Hardware ALWAYS expects accesses to
* be of the exact width.
*/
uacpi_status uacpi_kernel_io_read(
uacpi_handle, uacpi_size offset,
uacpi_u8 byte_width, uacpi_u64 *value
);
uacpi_status uacpi_kernel_io_write(
uacpi_handle, uacpi_size offset,
uacpi_u8 byte_width, uacpi_u64 value
);
void *uacpi_kernel_map(uacpi_phys_addr addr, uacpi_size len);
void uacpi_kernel_unmap(void *addr, uacpi_size len);
/*
* Allocate a block of memory of 'size' bytes.
* The contents of the allocated memory are unspecified.
*/
void *uacpi_kernel_alloc(uacpi_size size);
#ifdef UACPI_NATIVE_ALLOC_ZEROED
/*
* Allocate a block of memory of 'size' bytes.
* The returned memory block is expected to be zero-filled.
*/
void *uacpi_kernel_alloc_zeroed(uacpi_size size);
#endif
/*
* Free a previously allocated memory block.
*
* 'mem' might be a NULL pointer. In this case, the call is assumed to be a
* no-op.
*
* An optionally enabled 'size_hint' parameter contains the size of the original
* allocation. Note that in some scenarios this incurs additional cost to
* calculate the object size.
*/
#ifndef UACPI_SIZED_FREES
void uacpi_kernel_free(void *mem);
#else
void uacpi_kernel_free(void *mem, uacpi_size size_hint);
#endif
#ifndef UACPI_FORMATTED_LOGGING
void uacpi_kernel_log(uacpi_log_level, const uacpi_char*);
#else
UACPI_PRINTF_DECL(2, 3)
void uacpi_kernel_log(uacpi_log_level, const uacpi_char*, ...);
void uacpi_kernel_vlog(uacpi_log_level, const uacpi_char*, uacpi_va_list);
#endif
/*
* Returns the number of nanosecond ticks elapsed since boot,
* strictly monotonic.
*/
uacpi_u64 uacpi_kernel_get_nanoseconds_since_boot(void);
/*
* Spin for N microseconds.
*/
void uacpi_kernel_stall(uacpi_u8 usec);
/*
* Sleep for N milliseconds.
*/
void uacpi_kernel_sleep(uacpi_u64 msec);
/*
* Create/free an opaque non-recursive kernel mutex object.
*/
uacpi_handle uacpi_kernel_create_mutex(void);
void uacpi_kernel_free_mutex(uacpi_handle);
/*
* Create/free an opaque kernel (semaphore-like) event object.
*/
uacpi_handle uacpi_kernel_create_event(void);
void uacpi_kernel_free_event(uacpi_handle);
/*
* Returns a unique identifier of the currently executing thread.
*
* The returned thread id cannot be UACPI_THREAD_ID_NONE.
*/
uacpi_thread_id uacpi_kernel_get_thread_id(void);
/*
* Try to acquire the mutex with a millisecond timeout.
*
* The timeout value has the following meanings:
* 0x0000 - Attempt to acquire the mutex once, in a non-blocking manner
* 0x0001...0xFFFE - Attempt to acquire the mutex for at least 'timeout'
* milliseconds
* 0xFFFF - Infinite wait, block until the mutex is acquired
*
* The following are possible return values:
* 1. UACPI_STATUS_OK - successful acquire operation
* 2. UACPI_STATUS_TIMEOUT - timeout reached while attempting to acquire (or the
* single attempt to acquire was not successful for
* calls with timeout=0)
* 3. Any other value - signifies a host internal error and is treated as such
*/
uacpi_status uacpi_kernel_acquire_mutex(uacpi_handle, uacpi_u16);
void uacpi_kernel_release_mutex(uacpi_handle);
/*
* Try to wait for an event (counter > 0) with a millisecond timeout.
* A timeout value of 0xFFFF implies infinite wait.
*
* The internal counter is decremented by 1 if wait was successful.
*
* A successful wait is indicated by returning UACPI_TRUE.
*/
uacpi_bool uacpi_kernel_wait_for_event(uacpi_handle, uacpi_u16);
/*
* Signal the event object by incrementing its internal counter by 1.
*
* This function may be used in interrupt contexts.
*/
void uacpi_kernel_signal_event(uacpi_handle);
/*
* Reset the event counter to 0.
*/
void uacpi_kernel_reset_event(uacpi_handle);
/*
* Handle a firmware request.
*
 * Currently this is either the Breakpoint or the Fatal operator.
*/
uacpi_status uacpi_kernel_handle_firmware_request(uacpi_firmware_request*);
/*
* Install an interrupt handler at 'irq', 'ctx' is passed to the provided
* handler for every invocation.
*
* 'out_irq_handle' is set to a kernel-implemented value that can be used to
* refer to this handler from other API.
*/
uacpi_status uacpi_kernel_install_interrupt_handler(
uacpi_u32 irq, uacpi_interrupt_handler, uacpi_handle ctx,
uacpi_handle *out_irq_handle
);
/*
* Uninstall an interrupt handler. 'irq_handle' is the value returned via
* 'out_irq_handle' during installation.
*/
uacpi_status uacpi_kernel_uninstall_interrupt_handler(
uacpi_interrupt_handler, uacpi_handle irq_handle
);
/*
* Create/free a kernel spinlock object.
*
* Unlike other types of locks, spinlocks may be used in interrupt contexts.
*/
uacpi_handle uacpi_kernel_create_spinlock(void);
void uacpi_kernel_free_spinlock(uacpi_handle);
/*
* Lock/unlock helpers for spinlocks.
*
* These are expected to disable interrupts and return the previous state of
* the CPU flags, which can be used to re-enable interrupts if they were
* enabled before.
*
* Note that locking is infallible.
*/
uacpi_cpu_flags uacpi_kernel_lock_spinlock(uacpi_handle);
void uacpi_kernel_unlock_spinlock(uacpi_handle, uacpi_cpu_flags);
typedef enum uacpi_work_type {
/*
* Schedule a GPE handler method for execution.
* This should be scheduled to run on CPU0 to avoid potential SMI-related
* firmware bugs.
*/
UACPI_WORK_GPE_EXECUTION,
/*
* Schedule a Notify(device) firmware request for execution.
* This can run on any CPU.
*/
UACPI_WORK_NOTIFICATION,
} uacpi_work_type;
typedef void (*uacpi_work_handler)(uacpi_handle);
/*
* Schedules deferred work for execution.
* Might be invoked from an interrupt context.
*/
uacpi_status uacpi_kernel_schedule_work(
uacpi_work_type, uacpi_work_handler, uacpi_handle ctx
);
/*
* Waits for two types of work to finish:
* 1. All in-flight interrupts installed via uacpi_kernel_install_interrupt_handler
* 2. All work scheduled via uacpi_kernel_schedule_work
*
* Note that the waits must be done in this order specifically.
*/
uacpi_status uacpi_kernel_wait_for_work_completion(void);
#ifdef __cplusplus
}
#endif

#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef struct uacpi_namespace_node uacpi_namespace_node;
uacpi_namespace_node *uacpi_namespace_root(void);
typedef enum uacpi_predefined_namespace {
UACPI_PREDEFINED_NAMESPACE_ROOT = 0,
UACPI_PREDEFINED_NAMESPACE_GPE,
UACPI_PREDEFINED_NAMESPACE_PR,
UACPI_PREDEFINED_NAMESPACE_SB,
UACPI_PREDEFINED_NAMESPACE_SI,
UACPI_PREDEFINED_NAMESPACE_TZ,
UACPI_PREDEFINED_NAMESPACE_GL,
UACPI_PREDEFINED_NAMESPACE_OS,
UACPI_PREDEFINED_NAMESPACE_OSI,
UACPI_PREDEFINED_NAMESPACE_REV,
UACPI_PREDEFINED_NAMESPACE_MAX = UACPI_PREDEFINED_NAMESPACE_REV,
} uacpi_predefined_namespace;
uacpi_namespace_node *uacpi_namespace_get_predefined(
uacpi_predefined_namespace
);
/*
* Returns UACPI_TRUE if the provided 'node' is an alias.
*/
uacpi_bool uacpi_namespace_node_is_alias(uacpi_namespace_node *node);
uacpi_object_name uacpi_namespace_node_name(const uacpi_namespace_node *node);
/*
* Returns the type of object stored at the namespace node.
*
* NOTE: due to the existence of the CopyObject operator in AML, the
* return value of this function is subject to TOCTOU bugs.
*/
uacpi_status uacpi_namespace_node_type(
const uacpi_namespace_node *node, uacpi_object_type *out_type
);
/*
* Returns UACPI_TRUE via 'out' if the type of the object stored at the
* namespace node matches the provided value, UACPI_FALSE otherwise.
*
* NOTE: due to the existence of the CopyObject operator in AML, the
* return value of this function is subject to TOCTOU bugs.
*/
uacpi_status uacpi_namespace_node_is(
const uacpi_namespace_node *node, uacpi_object_type type, uacpi_bool *out
);
/*
* Returns UACPI_TRUE via 'out' if the type of the object stored at the
* namespace node matches any of the type bits in the provided value,
* UACPI_FALSE otherwise.
*
* NOTE: due to the existence of the CopyObject operator in AML, the
* return value of this function is subject to TOCTOU bugs.
*/
uacpi_status uacpi_namespace_node_is_one_of(
const uacpi_namespace_node *node, uacpi_object_type_bits type_mask,
uacpi_bool *out
);
uacpi_size uacpi_namespace_node_depth(const uacpi_namespace_node *node);
uacpi_namespace_node *uacpi_namespace_node_parent(
uacpi_namespace_node *node
);
uacpi_status uacpi_namespace_node_find(
uacpi_namespace_node *parent,
const uacpi_char *path,
uacpi_namespace_node **out_node
);
/*
* Same as uacpi_namespace_node_find, except the search recurses upwards when
* the namepath consists of only a single nameseg. Usually, this behavior is
* only desired when resolving a namepath specified in an AML-provided object,
* such as a package element.
*/
uacpi_status uacpi_namespace_node_resolve_from_aml_namepath(
uacpi_namespace_node *scope,
const uacpi_char *path,
uacpi_namespace_node **out_node
);
typedef uacpi_iteration_decision (*uacpi_iteration_callback) (
void *user, uacpi_namespace_node *node, uacpi_u32 node_depth
);
#define UACPI_MAX_DEPTH_ANY 0xFFFFFFFF
/*
* Depth-first iterate the namespace starting at the first child of 'parent'.
*/
uacpi_status uacpi_namespace_for_each_child_simple(
uacpi_namespace_node *parent, uacpi_iteration_callback callback, void *user
);
/*
* Depth-first iterate the namespace starting at the first child of 'parent'.
*
* 'descending_callback' is invoked the first time a node is visited when
* walking down. 'ascending_callback' is invoked the second time a node is
* visited after we reach the leaf node without children and start walking up.
* Either of the callbacks may be NULL, but not both at the same time.
*
* Only nodes matching 'type_mask' are passed to the callbacks.
*
* 'max_depth' is used to limit the maximum reachable depth from 'parent',
* where 1 is only direct children of 'parent', 2 is children of first-level
* children etc. Use UACPI_MAX_DEPTH_ANY or -1 to specify infinite depth.
*/
uacpi_status uacpi_namespace_for_each_child(
uacpi_namespace_node *parent, uacpi_iteration_callback descending_callback,
uacpi_iteration_callback ascending_callback,
uacpi_object_type_bits type_mask, uacpi_u32 max_depth, void *user
);
const uacpi_char *uacpi_namespace_node_generate_absolute_path(
const uacpi_namespace_node *node
);
void uacpi_free_absolute_path(const uacpi_char *path);
#ifdef __cplusplus
}
#endif

#pragma once
#include <uacpi/types.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Install a Notify() handler to a device node.
* A handler installed to the root node will receive all notifications, even if
* a device already has a dedicated Notify handler.
* 'handler_context' is passed to the handler on every invocation.
*/
uacpi_status uacpi_install_notify_handler(
uacpi_namespace_node *node, uacpi_notify_handler handler,
uacpi_handle handler_context
);
uacpi_status uacpi_uninstall_notify_handler(
uacpi_namespace_node *node, uacpi_notify_handler handler
);
#ifdef __cplusplus
}
#endif

#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Install an address space handler to a device node.
* The handler is recursively connected to all of the operation regions of
* type 'space' underneath 'device_node'. Note that this recursion stops as
* soon as another device node that already has an address space handler of
* this type installed is encountered.
*/
uacpi_status uacpi_install_address_space_handler(
uacpi_namespace_node *device_node, enum uacpi_address_space space,
uacpi_region_handler handler, uacpi_handle handler_context
);
/*
* Uninstall the handler of type 'space' from a given device node.
*/
uacpi_status uacpi_uninstall_address_space_handler(
uacpi_namespace_node *device_node,
enum uacpi_address_space space
);
/*
* Execute _REG(space, ACPI_REG_CONNECT) for all of the opregions with this
* address space underneath this device. This should only be called manually
* if you want to register an early handler that must be available before the
* call to uacpi_namespace_initialize().
*/
uacpi_status uacpi_reg_all_opregions(
uacpi_namespace_node *device_node,
enum uacpi_address_space space
);
#ifdef __cplusplus
}
#endif

src/include/uacpi/osi.h
#pragma once
#include <uacpi/platform/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef enum uacpi_vendor_interface {
UACPI_VENDOR_INTERFACE_NONE = 0,
UACPI_VENDOR_INTERFACE_WINDOWS_2000,
UACPI_VENDOR_INTERFACE_WINDOWS_XP,
UACPI_VENDOR_INTERFACE_WINDOWS_XP_SP1,
UACPI_VENDOR_INTERFACE_WINDOWS_SERVER_2003,
UACPI_VENDOR_INTERFACE_WINDOWS_XP_SP2,
UACPI_VENDOR_INTERFACE_WINDOWS_SERVER_2003_SP1,
UACPI_VENDOR_INTERFACE_WINDOWS_VISTA,
UACPI_VENDOR_INTERFACE_WINDOWS_SERVER_2008,
UACPI_VENDOR_INTERFACE_WINDOWS_VISTA_SP1,
UACPI_VENDOR_INTERFACE_WINDOWS_VISTA_SP2,
UACPI_VENDOR_INTERFACE_WINDOWS_7,
UACPI_VENDOR_INTERFACE_WINDOWS_8,
UACPI_VENDOR_INTERFACE_WINDOWS_8_1,
UACPI_VENDOR_INTERFACE_WINDOWS_10,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS1,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS2,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS3,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS4,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS5,
UACPI_VENDOR_INTERFACE_WINDOWS_10_19H1,
UACPI_VENDOR_INTERFACE_WINDOWS_10_20H1,
UACPI_VENDOR_INTERFACE_WINDOWS_11,
UACPI_VENDOR_INTERFACE_WINDOWS_11_22H2,
} uacpi_vendor_interface;
/*
* Returns the "latest" AML-queried _OSI vendor interface.
*
* E.g. for the following AML code:
* _OSI("Windows 2021")
* _OSI("Windows 2000")
*
* This function will return UACPI_VENDOR_INTERFACE_WINDOWS_11, since this is
* the latest version of the interface the code queried, even though the
* "Windows 2000" query came after "Windows 2021".
*/
uacpi_vendor_interface uacpi_latest_queried_vendor_interface(void);
typedef enum uacpi_interface_kind {
UACPI_INTERFACE_KIND_VENDOR = (1 << 0),
UACPI_INTERFACE_KIND_FEATURE = (1 << 1),
UACPI_INTERFACE_KIND_ALL = UACPI_INTERFACE_KIND_VENDOR |
UACPI_INTERFACE_KIND_FEATURE,
} uacpi_interface_kind;
/*
* Install or uninstall an interface.
*
* The interface kind is used for matching during interface enumeration in
* uacpi_bulk_configure_interfaces().
*
* After installing an interface, all _OSI queries report it as supported.
*/
uacpi_status uacpi_install_interface(
const uacpi_char *name, uacpi_interface_kind
);
uacpi_status uacpi_uninstall_interface(const uacpi_char *name);
typedef enum uacpi_host_interface {
UACPI_HOST_INTERFACE_MODULE_DEVICE = 1,
UACPI_HOST_INTERFACE_PROCESSOR_DEVICE,
UACPI_HOST_INTERFACE_3_0_THERMAL_MODEL,
UACPI_HOST_INTERFACE_3_0_SCP_EXTENSIONS,
UACPI_HOST_INTERFACE_PROCESSOR_AGGREGATOR_DEVICE,
} uacpi_host_interface;
/*
* Same as install/uninstall interface, but comes with an enum of known
* interfaces defined by the ACPI specification. These are disabled by default
 * as they depend on host kernel support.
*/
uacpi_status uacpi_enable_host_interface(uacpi_host_interface);
uacpi_status uacpi_disable_host_interface(uacpi_host_interface);
typedef uacpi_bool (*uacpi_interface_handler)
(const uacpi_char *name, uacpi_bool supported);
/*
* Set a custom interface query (_OSI) handler.
*
* This callback will be invoked for each _OSI query with the value
* passed in the _OSI, as well as whether the interface was detected as
* supported. The callback is able to override the return value dynamically
* or leave it untouched if desired (e.g. if it simply wants to log something or
* do internal bookkeeping of some kind).
*/
uacpi_status uacpi_set_interface_query_handler(uacpi_interface_handler);
typedef enum uacpi_interface_action {
UACPI_INTERFACE_ACTION_DISABLE = 0,
UACPI_INTERFACE_ACTION_ENABLE,
} uacpi_interface_action;
/*
* Bulk interface configuration, used to disable or enable all interfaces that
* match 'kind'.
*
* This is generally only needed to work around buggy hardware, for example if
* requested from the kernel command line.
*
* By default, all vendor strings (like "Windows 2000") are enabled, and all
* host features (like "3.0 Thermal Model") are disabled.
*/
uacpi_status uacpi_bulk_configure_interfaces(
uacpi_interface_action action, uacpi_interface_kind kind
);
#ifdef __cplusplus
}
#endif
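The query-handler override described above can be modeled in isolation: whatever the handler returns becomes the final _OSI answer. This is a self-contained sketch of that contract, not the uACPI implementation; the `osi_query` and `deny_win2000` names are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef bool (*interface_handler)(const char *name, bool supported);

/* Mimics how a query handler can override the detected result:
 * the value the handler returns becomes the final _OSI answer. */
static bool osi_query(const char *name, bool builtin_supported,
                      interface_handler handler)
{
    if (handler != NULL)
        return handler(name, builtin_supported);
    return builtin_supported;
}

/* Example handler: report "Windows 2000" as unsupported, pass
 * everything else through untouched. */
static bool deny_win2000(const char *name, bool supported)
{
    if (strcmp(name, "Windows 2000") == 0)
        return false;
    return supported;
}
```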


@@ -0,0 +1,38 @@
#pragma once
#ifdef UACPI_OVERRIDE_ARCH_HELPERS
#include "uacpi_arch_helpers.h"
#else
#include <uacpi/platform/atomic.h>
#ifndef UACPI_ARCH_FLUSH_CPU_CACHE
#define UACPI_ARCH_FLUSH_CPU_CACHE() do {} while (0)
#endif
typedef unsigned long uacpi_cpu_flags;
typedef void *uacpi_thread_id;
/*
* Replace as needed depending on your platform's way to represent thread ids.
* uACPI offers a few more helpers like uacpi_atomic_{load,store}{8,16,32,64,ptr}
* (or you could provide your own helpers)
*/
#ifndef UACPI_ATOMIC_LOAD_THREAD_ID
#define UACPI_ATOMIC_LOAD_THREAD_ID(ptr) ((uacpi_thread_id)uacpi_atomic_load_ptr(ptr))
#endif
#ifndef UACPI_ATOMIC_STORE_THREAD_ID
#define UACPI_ATOMIC_STORE_THREAD_ID(ptr, value) uacpi_atomic_store_ptr(ptr, value)
#endif
/*
 * A sentinel value that the kernel promises to NEVER return from
 * uacpi_kernel_get_current_thread_id(), as it is reserved to mean
 * "no thread"; returning it anyway will break thread tracking
*/
#ifndef UACPI_THREAD_ID_NONE
#define UACPI_THREAD_ID_NONE ((uacpi_thread_id)-1)
#endif
#endif
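As the comment above notes, a host that represents thread ids as integers rather than pointers can override these macros. A minimal self-contained sketch of the same load/store pattern with a plain integer id; the `MY_*` names are illustrative stand-ins, a real port would use the `uacpi_atomic_{load,store}32` helpers.

```c
#include <stdint.h>

typedef uint32_t my_thread_id;

/* Mirrors the UACPI_ATOMIC_{LOAD,STORE}_THREAD_ID pattern, but for
 * a 32-bit integer id instead of a pointer-sized one. */
#define MY_ATOMIC_LOAD_THREAD_ID(ptr) \
    __atomic_load_n((ptr), __ATOMIC_ACQUIRE)
#define MY_ATOMIC_STORE_THREAD_ID(ptr, value) \
    __atomic_store_n((ptr), (value), __ATOMIC_RELEASE)

/* Sentinel with the same "never a real thread id" promise. */
#define MY_THREAD_ID_NONE ((my_thread_id)-1)
```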


@@ -0,0 +1,129 @@
#pragma once
/*
* Most of this header is a giant workaround for MSVC to make atomics into a
* somewhat unified interface with how GCC and Clang handle them.
*
* We don't use the absolutely disgusting C11 stdatomic.h header because it is
* unable to operate on non _Atomic types, which enforce implicit sequential
* consistency and alter the behavior of the standard C binary/unary operators.
*
* The strictness of the atomic helpers defined here is assumed to be at least
* acquire for loads and release for stores. Cmpxchg uses the standard acq/rel
* for success, acq for failure, and is assumed to be strong.
*/
#ifdef UACPI_OVERRIDE_ATOMIC
#include "uacpi_atomic.h"
#else
#include <uacpi/platform/compiler.h>
#ifdef _MSC_VER
#include <intrin.h>
// mimic __atomic_compare_exchange_n that doesn't exist on MSVC
#define UACPI_MAKE_MSVC_CMPXCHG(width, type, suffix) \
static inline int uacpi_do_atomic_cmpxchg##width( \
type volatile *ptr, type volatile *expected, type desired \
) \
{ \
type current; \
\
current = _InterlockedCompareExchange##suffix(ptr, desired, *expected); \
if (current != *expected) { \
*expected = current; \
return 0; \
} \
return 1; \
}
#define UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, width, type) \
uacpi_do_atomic_cmpxchg##width( \
(type volatile*)ptr, (type volatile*)expected, desired \
)
#define UACPI_MSVC_ATOMIC_STORE(ptr, value, type, width) \
_InterlockedExchange##width((type volatile*)(ptr), (type)(value))
#define UACPI_MSVC_ATOMIC_LOAD(ptr, type, width) \
_InterlockedOr##width((type volatile*)(ptr), 0)
#define UACPI_MSVC_ATOMIC_INC(ptr, type, width) \
_InterlockedIncrement##width((type volatile*)(ptr))
#define UACPI_MSVC_ATOMIC_DEC(ptr, type, width) \
_InterlockedDecrement##width((type volatile*)(ptr))
UACPI_MAKE_MSVC_CMPXCHG(64, __int64, 64)
UACPI_MAKE_MSVC_CMPXCHG(32, long,)
UACPI_MAKE_MSVC_CMPXCHG(16, short, 16)
#define uacpi_atomic_cmpxchg16(ptr, expected, desired) \
UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, 16, short)
#define uacpi_atomic_cmpxchg32(ptr, expected, desired) \
UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, 32, long)
#define uacpi_atomic_cmpxchg64(ptr, expected, desired) \
UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, 64, __int64)
#define uacpi_atomic_load8(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, char, 8)
#define uacpi_atomic_load16(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, short, 16)
#define uacpi_atomic_load32(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, long,)
#define uacpi_atomic_load64(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, __int64, 64)
#define uacpi_atomic_store8(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, char, 8)
#define uacpi_atomic_store16(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, short, 16)
#define uacpi_atomic_store32(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, long,)
#define uacpi_atomic_store64(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, __int64, 64)
#define uacpi_atomic_inc16(ptr) UACPI_MSVC_ATOMIC_INC(ptr, short, 16)
#define uacpi_atomic_inc32(ptr) UACPI_MSVC_ATOMIC_INC(ptr, long,)
#define uacpi_atomic_inc64(ptr) UACPI_MSVC_ATOMIC_INC(ptr, __int64, 64)
#define uacpi_atomic_dec16(ptr) UACPI_MSVC_ATOMIC_DEC(ptr, short, 16)
#define uacpi_atomic_dec32(ptr) UACPI_MSVC_ATOMIC_DEC(ptr, long,)
#define uacpi_atomic_dec64(ptr) UACPI_MSVC_ATOMIC_DEC(ptr, __int64, 64)
#else
#define UACPI_DO_CMPXCHG(ptr, expected, desired) \
__atomic_compare_exchange_n(ptr, expected, desired, 0, \
__ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)
#define uacpi_atomic_cmpxchg16(ptr, expected, desired) \
UACPI_DO_CMPXCHG(ptr, expected, desired)
#define uacpi_atomic_cmpxchg32(ptr, expected, desired) \
UACPI_DO_CMPXCHG(ptr, expected, desired)
#define uacpi_atomic_cmpxchg64(ptr, expected, desired) \
UACPI_DO_CMPXCHG(ptr, expected, desired)
#define uacpi_atomic_load8(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_load16(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_load32(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_load64(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_store8(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_store16(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_store32(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_store64(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_inc16(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_inc32(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_inc64(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_dec16(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_dec32(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_dec64(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#endif
#if UACPI_POINTER_SIZE == 4
#define uacpi_atomic_load_ptr(ptr_to_ptr) uacpi_atomic_load32(ptr_to_ptr)
#define uacpi_atomic_store_ptr(ptr_to_ptr, value) uacpi_atomic_store32(ptr_to_ptr, value)
#else
#define uacpi_atomic_load_ptr(ptr_to_ptr) uacpi_atomic_load64(ptr_to_ptr)
#define uacpi_atomic_store_ptr(ptr_to_ptr, value) uacpi_atomic_store64(ptr_to_ptr, value)
#endif
#endif
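On both compiler branches the compare-exchange helpers follow the `__atomic_compare_exchange_n` contract: non-zero is returned on success, and on failure the observed value is written back into `*expected`. A self-contained check of that contract using the GCC/Clang path (the `cmpxchg32` wrapper name is illustrative):

```c
#include <stdint.h>

/* Same shape as the GCC/Clang branch of uacpi_atomic_cmpxchg32:
 * strong cmpxchg, acq_rel on success, acquire on failure. */
static int cmpxchg32(volatile uint32_t *ptr, uint32_t *expected,
                     uint32_t desired)
{
    return __atomic_compare_exchange_n(ptr, expected, desired, 0,
                                       __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
}
```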


@@ -0,0 +1,82 @@
#pragma once
/*
* Compiler-specific attributes/macros go here. This is the default placeholder
* that should work for MSVC/GCC/clang.
*/
#ifdef UACPI_OVERRIDE_COMPILER
#include "uacpi_compiler.h"
#else
#ifdef _MSC_VER
#include <intrin.h>
#define UACPI_ALIGN(x) __declspec(align(x))
#define UACPI_ALWAYS_INLINE __forceinline
#define UACPI_PACKED(decl) \
__pragma(pack(push, 1)) \
decl; \
__pragma(pack(pop))
#else
#define UACPI_ALIGN(x) __attribute__((aligned(x)))
#define UACPI_ALWAYS_INLINE inline __attribute__((always_inline))
#define UACPI_PACKED(decl) decl __attribute__((packed));
#endif
#ifdef __GNUC__
#define uacpi_unlikely(expr) __builtin_expect(!!(expr), 0)
#define uacpi_likely(expr) __builtin_expect(!!(expr), 1)
#if __has_attribute(__fallthrough__)
#define UACPI_FALLTHROUGH __attribute__((__fallthrough__))
#endif
#define UACPI_MAYBE_UNUSED __attribute__ ((unused))
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_BEGIN \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wunused-parameter\"")
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_END \
_Pragma("GCC diagnostic pop")
#ifdef __clang__
#define UACPI_PRINTF_DECL(fmt_idx, args_idx) \
__attribute__((format(printf, fmt_idx, args_idx)))
#else
#define UACPI_PRINTF_DECL(fmt_idx, args_idx) \
__attribute__((format(gnu_printf, fmt_idx, args_idx)))
#endif
#else
#define uacpi_unlikely(expr) (expr)
#define uacpi_likely(expr) (expr)
#define UACPI_MAYBE_UNUSED
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_BEGIN
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_END
#define UACPI_PRINTF_DECL(fmt_idx, args_idx)
#endif
#ifndef UACPI_FALLTHROUGH
#define UACPI_FALLTHROUGH do {} while (0)
#endif
#ifndef UACPI_POINTER_SIZE
#ifdef _WIN32
#ifdef _WIN64
#define UACPI_POINTER_SIZE 8
#else
#define UACPI_POINTER_SIZE 4
#endif
#elif defined(__GNUC__)
#define UACPI_POINTER_SIZE __SIZEOF_POINTER__
#else
#error Failed to detect pointer size
#endif
#endif
#endif
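The effect of `UACPI_PACKED` can be checked with a small standalone example. This mirrors the GCC/Clang branch above; the `MY_PACKED` macro and the `on_wire` struct are illustrative, not part of the header.

```c
#include <stdint.h>

/* Same expansion as the GCC/Clang branch of UACPI_PACKED. */
#define MY_PACKED(decl) decl __attribute__((packed));

/* Without packing, 'b' would typically be aligned to a 4-byte
 * boundary and sizeof would be 8; packed, it is exactly 5. */
MY_PACKED(struct on_wire {
    uint8_t a;
    uint32_t b;
})
```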


@@ -0,0 +1,130 @@
#pragma once
#ifdef UACPI_OVERRIDE_CONFIG
#include "uacpi_config.h"
#else
#include <uacpi/helpers.h>
#include <uacpi/types.h>
/*
* =======================
* Context-related options
* =======================
*/
#ifndef UACPI_DEFAULT_LOG_LEVEL
#define UACPI_DEFAULT_LOG_LEVEL UACPI_LOG_INFO
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_DEFAULT_LOG_LEVEL < UACPI_LOG_ERROR ||
UACPI_DEFAULT_LOG_LEVEL > UACPI_LOG_DEBUG,
"configured default log level is invalid"
);
#ifndef UACPI_DEFAULT_LOOP_TIMEOUT_SECONDS
#define UACPI_DEFAULT_LOOP_TIMEOUT_SECONDS 30
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_DEFAULT_LOOP_TIMEOUT_SECONDS < 1,
"configured default loop timeout is invalid (expecting at least 1 second)"
);
#ifndef UACPI_DEFAULT_MAX_CALL_STACK_DEPTH
#define UACPI_DEFAULT_MAX_CALL_STACK_DEPTH 256
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_DEFAULT_MAX_CALL_STACK_DEPTH < 4,
"configured default max call stack depth is invalid "
"(expecting at least 4 frames)"
);
/*
* ===================
* Kernel-api options
* ===================
*/
/*
* Convenience initialization/deinitialization hooks that will be called by
* uACPI automatically when appropriate if compiled-in.
*/
// #define UACPI_KERNEL_INITIALIZATION
/*
* Makes kernel api logging callbacks work with unformatted printf-style
* strings and va_args instead of a pre-formatted string. Can be useful if
* your native logging is implemented in terms of this format as well.
*/
// #define UACPI_FORMATTED_LOGGING
/*
* Makes uacpi_kernel_free take in an additional 'size_hint' parameter, which
* contains the size of the original allocation. Note that this comes with a
* performance penalty in some cases.
*/
// #define UACPI_SIZED_FREES
/*
* Makes uacpi_kernel_alloc_zeroed mandatory to implement by the host, uACPI
* will not provide a default implementation if this is enabled.
*/
// #define UACPI_NATIVE_ALLOC_ZEROED
/*
* =========================
* Platform-specific options
* =========================
*/
/*
* Turns uacpi_phys_addr and uacpi_io_addr into a 32-bit type, and adds extra
* code for address truncation. Needed for e.g. i686 platforms without PAE
* support.
*/
// #define UACPI_PHYS_ADDR_IS_32BITS
/*
* Switches uACPI into reduced-hardware-only mode. Strips all full-hardware
* ACPI support code at compile-time, including the event subsystem, the global
* lock, and other full-hardware features.
*/
// #define UACPI_REDUCED_HARDWARE
/*
* =============
* Misc. options
* =============
*/
/*
* If UACPI_FORMATTED_LOGGING is not enabled, this is the maximum length of the
* pre-formatted message that is passed to the logging callback.
*/
#ifndef UACPI_PLAIN_LOG_BUFFER_SIZE
#define UACPI_PLAIN_LOG_BUFFER_SIZE 128
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_PLAIN_LOG_BUFFER_SIZE < 16,
"configured log buffer size is too small (expecting at least 16 bytes)"
);
/*
* The size of the table descriptor inline storage. All table descriptors past
* this length will be stored in a dynamically allocated heap array. The size
* of one table descriptor is approximately 56 bytes.
*/
#ifndef UACPI_STATIC_TABLE_ARRAY_LEN
#define UACPI_STATIC_TABLE_ARRAY_LEN 16
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_STATIC_TABLE_ARRAY_LEN < 1,
"configured static table array length is too small (expecting at least 1)"
);
#endif
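UACPI_BUILD_BUG_ON_WITH_MSG (defined in uacpi/helpers.h, not shown in this excerpt) acts as a compile-time assertion, so an out-of-range configuration value fails the build rather than misbehaving at runtime. A minimal stand-in with the same effect using C11 `_Static_assert` (the `MY_*` names are illustrative):

```c
/* Stand-in for a configurable option with a validity check,
 * mirroring the range checks in this header. */
#define MY_DEFAULT_LOOP_TIMEOUT_SECONDS 30

_Static_assert(MY_DEFAULT_LOOP_TIMEOUT_SECONDS >= 1,
               "configured default loop timeout is invalid");
```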


@@ -0,0 +1,25 @@
#pragma once
#ifdef UACPI_OVERRIDE_LIBC
#include "uacpi_libc.h"
#else
/*
* The following libc functions are used internally by uACPI and have a default
* (sub-optimal) implementation:
* - memcpy
* - memset
* - memcmp
* - strcmp
* - memmove
* - strnlen
* - strlen
* - snprintf
* - vsnprintf
*
 * If your platform implements optimized versions of the helpers above, you
 * can make uACPI use those instead by overriding them like so:
*
* #define uacpi_memcpy my_fast_memcpy
* #define uacpi_snprintf my_fast_snprintf
*/
#endif
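The override mechanism described above is purely macro-based: if the platform defines `uacpi_memcpy`, every internal call site resolves to the host's function name at compile time. A self-contained sketch of the same dispatch pattern; `my_fast_memcpy` is an illustrative placeholder for a real optimized routine.

```c
#include <stddef.h>
#include <string.h>

/* Host-provided "optimized" implementation (here it simply
 * delegates to libc memcpy for demonstration purposes). */
static void *my_fast_memcpy(void *dst, const void *src, size_t n)
{
    return memcpy(dst, src, n);
}

/* Equivalent of '#define uacpi_memcpy my_fast_memcpy' from the
 * comment above: calls to uacpi_memcpy now reach the host version. */
#define uacpi_memcpy my_fast_memcpy
```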


@@ -0,0 +1,64 @@
#pragma once
/*
* Platform-specific types go here. This is the default placeholder using
* types from the standard headers.
*/
#ifdef UACPI_OVERRIDE_TYPES
#include "uacpi_types.h"
#else
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include <stdarg.h>
#include <uacpi/helpers.h>
typedef uint8_t uacpi_u8;
typedef uint16_t uacpi_u16;
typedef uint32_t uacpi_u32;
typedef uint64_t uacpi_u64;
typedef int8_t uacpi_i8;
typedef int16_t uacpi_i16;
typedef int32_t uacpi_i32;
typedef int64_t uacpi_i64;
#define UACPI_TRUE true
#define UACPI_FALSE false
typedef bool uacpi_bool;
#define UACPI_NULL NULL
typedef uintptr_t uacpi_uintptr;
typedef uacpi_uintptr uacpi_virt_addr;
typedef size_t uacpi_size;
typedef va_list uacpi_va_list;
#define uacpi_va_start va_start
#define uacpi_va_end va_end
#define uacpi_va_arg va_arg
typedef char uacpi_char;
#define uacpi_offsetof offsetof
/*
 * We use unsigned long long for 64-bit number formatting because 64-bit types
 * don't have a standard way to format them. The inttypes.h header is not
 * freestanding, therefore it's not practical to force the user to define the
 * corresponding PRI macros. Moreover, unsigned long long is required to be
 * at least 64 bits wide as per C99.
*/
UACPI_BUILD_BUG_ON_WITH_MSG(
sizeof(unsigned long long) < 8,
"unsigned long long must be at least 64 bits large as per C99"
);
#define UACPI_PRIu64 "llu"
#define UACPI_PRIx64 "llx"
#define UACPI_PRIX64 "llX"
#define UACPI_FMT64(val) ((unsigned long long)(val))
#endif
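The PRI-style macros above make 64-bit formatting portable without inttypes.h: the value is cast to unsigned long long and printed with an "ll"-prefixed conversion. A self-contained sketch of that usage pattern; the `MY_*` macros and the `format_hex64` helper are illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Same idea as UACPI_PRIx64/UACPI_FMT64. */
#define MY_PRIx64 "llx"
#define MY_FMT64(val) ((unsigned long long)(val))

/* Formats a 64-bit value as hex: the format string concatenates
 * the macro, and the value is wrapped in the widening cast. */
static int format_hex64(char *buf, size_t size, uint64_t val)
{
    return snprintf(buf, size, "%" MY_PRIx64, MY_FMT64(val));
}
```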


@@ -0,0 +1,674 @@
#pragma once
#include <uacpi/types.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef enum uacpi_resource_type {
UACPI_RESOURCE_TYPE_IRQ,
UACPI_RESOURCE_TYPE_EXTENDED_IRQ,
UACPI_RESOURCE_TYPE_DMA,
UACPI_RESOURCE_TYPE_FIXED_DMA,
UACPI_RESOURCE_TYPE_IO,
UACPI_RESOURCE_TYPE_FIXED_IO,
UACPI_RESOURCE_TYPE_ADDRESS16,
UACPI_RESOURCE_TYPE_ADDRESS32,
UACPI_RESOURCE_TYPE_ADDRESS64,
UACPI_RESOURCE_TYPE_ADDRESS64_EXTENDED,
UACPI_RESOURCE_TYPE_MEMORY24,
UACPI_RESOURCE_TYPE_MEMORY32,
UACPI_RESOURCE_TYPE_FIXED_MEMORY32,
UACPI_RESOURCE_TYPE_START_DEPENDENT,
UACPI_RESOURCE_TYPE_END_DEPENDENT,
// Up to 7 bytes
UACPI_RESOURCE_TYPE_VENDOR_SMALL,
// Up to 2^16 - 1 bytes
UACPI_RESOURCE_TYPE_VENDOR_LARGE,
UACPI_RESOURCE_TYPE_GENERIC_REGISTER,
UACPI_RESOURCE_TYPE_GPIO_CONNECTION,
// These must always be contiguous in this order
UACPI_RESOURCE_TYPE_SERIAL_I2C_CONNECTION,
UACPI_RESOURCE_TYPE_SERIAL_SPI_CONNECTION,
UACPI_RESOURCE_TYPE_SERIAL_UART_CONNECTION,
UACPI_RESOURCE_TYPE_SERIAL_CSI2_CONNECTION,
UACPI_RESOURCE_TYPE_PIN_FUNCTION,
UACPI_RESOURCE_TYPE_PIN_CONFIGURATION,
UACPI_RESOURCE_TYPE_PIN_GROUP,
UACPI_RESOURCE_TYPE_PIN_GROUP_FUNCTION,
UACPI_RESOURCE_TYPE_PIN_GROUP_CONFIGURATION,
UACPI_RESOURCE_TYPE_CLOCK_INPUT,
UACPI_RESOURCE_TYPE_END_TAG,
UACPI_RESOURCE_TYPE_MAX = UACPI_RESOURCE_TYPE_END_TAG,
} uacpi_resource_type;
typedef struct uacpi_resource_source {
uacpi_u8 index;
uacpi_bool index_present;
uacpi_u16 length;
uacpi_char *string;
} uacpi_resource_source;
/*
* This applies to IRQ & StartDependent resources only. The DONT_CARE value is
 * used during serialization into the AML format to signify that the serializer
* is allowed to optimize the length down if possible. Note that this is
* generally not allowed unless the resource is generated by the caller:
*
* -- ACPI 6.5 ------------------------------------------------------------
* The resource descriptors in the byte stream argument must be specified
* exactly as listed in the _CRS byte stream - meaning that the identical
* resource descriptors must appear in the identical order, resulting in a
* buffer of exactly the same length. Optimizations such as changing an
* IRQ descriptor to an IRQNoFlags descriptor (or vice-versa) must not be
* performed. Similarly, changing StartDependentFn to StartDependentFnNoPri
* is not allowed.
* ------------------------------------------------------------------------
*/
enum uacpi_resource_length_kind {
UACPI_RESOURCE_LENGTH_KIND_DONT_CARE = 0,
UACPI_RESOURCE_LENGTH_KIND_ONE_LESS,
UACPI_RESOURCE_LENGTH_KIND_FULL,
};
// triggering fields
#define UACPI_TRIGGERING_EDGE 1
#define UACPI_TRIGGERING_LEVEL 0
// polarity
#define UACPI_POLARITY_ACTIVE_HIGH 0
#define UACPI_POLARITY_ACTIVE_LOW 1
#define UACPI_POLARITY_ACTIVE_BOTH 2
// sharing
#define UACPI_EXCLUSIVE 0
#define UACPI_SHARED 1
// wake_capability
#define UACPI_WAKE_CAPABLE 1
#define UACPI_NOT_WAKE_CAPABLE 0
typedef struct uacpi_resource_irq {
uacpi_u8 length_kind;
uacpi_u8 triggering;
uacpi_u8 polarity;
uacpi_u8 sharing;
uacpi_u8 wake_capability;
uacpi_u8 num_irqs;
uacpi_u8 irqs[];
} uacpi_resource_irq;
typedef struct uacpi_resource_extended_irq {
uacpi_u8 direction;
uacpi_u8 triggering;
uacpi_u8 polarity;
uacpi_u8 sharing;
uacpi_u8 wake_capability;
uacpi_u8 num_irqs;
uacpi_resource_source source;
uacpi_u32 irqs[];
} uacpi_resource_extended_irq;
// transfer_type
#define UACPI_TRANSFER_TYPE_8_BIT 0b00
#define UACPI_TRANSFER_TYPE_8_AND_16_BIT 0b01
#define UACPI_TRANSFER_TYPE_16_BIT 0b10
// bus_master_status
#define UACPI_BUS_MASTER 0b1
// channel_speed
#define UACPI_DMA_COMPATIBILITY 0b00
#define UACPI_DMA_TYPE_A 0b01
#define UACPI_DMA_TYPE_B 0b10
#define UACPI_DMA_TYPE_F 0b11
// transfer_width
#define UACPI_TRANSFER_WIDTH_8 0x00
#define UACPI_TRANSFER_WIDTH_16 0x01
#define UACPI_TRANSFER_WIDTH_32 0x02
#define UACPI_TRANSFER_WIDTH_64 0x03
#define UACPI_TRANSFER_WIDTH_128 0x04
#define UACPI_TRANSFER_WIDTH_256 0x05
typedef struct uacpi_resource_dma {
uacpi_u8 transfer_type;
uacpi_u8 bus_master_status;
uacpi_u8 channel_speed;
uacpi_u8 num_channels;
uacpi_u8 channels[];
} uacpi_resource_dma;
typedef struct uacpi_resource_fixed_dma {
uacpi_u16 request_line;
uacpi_u16 channel;
uacpi_u8 transfer_width;
} uacpi_resource_fixed_dma;
// decode_type
#define UACPI_DECODE_16 0b1
#define UACPI_DECODE_10 0b0
typedef struct uacpi_resource_io {
uacpi_u8 decode_type;
uacpi_u16 minimum;
uacpi_u16 maximum;
uacpi_u8 alignment;
uacpi_u8 length;
} uacpi_resource_io;
typedef struct uacpi_resource_fixed_io {
uacpi_u16 address;
uacpi_u8 length;
} uacpi_resource_fixed_io;
// write_status
#define UACPI_NON_WRITABLE 0
#define UACPI_WRITABLE 1
// caching
#define UACPI_NON_CACHEABLE 0
#define UACPI_CACHEABLE 1
#define UACPI_CACHEABLE_WRITE_COMBINING 2
#define UACPI_PREFETCHABLE 3
// range_type
#define UACPI_RANGE_TYPE_MEMORY 0
#define UACPI_RANGE_TYPE_RESERVED 1
#define UACPI_RANGE_TYPE_ACPI 2
#define UACPI_RANGE_TYPE_NVS 3
// address_common->type
#define UACPI_RANGE_MEMORY 0
#define UACPI_RANGE_IO 1
#define UACPI_RANGE_BUS 2
// translation
#define UACPI_IO_MEM_TRANSLATION 1
#define UACPI_IO_MEM_STATIC 0
// translation_type
#define UACPI_TRANSLATION_DENSE 0
#define UACPI_TRANSLATION_SPARSE 1
// direction
#define UACPI_PRODUCER 0
#define UACPI_CONSUMER 1
// decode_type
#define UACPI_POSITIVE_DECODE 0
#define UACPI_SUBTRACTIVE_DECODE 1
// fixed_min_address & fixed_max_address
#define UACPI_ADDRESS_NOT_FIXED 0
#define UACPI_ADDRESS_FIXED 1
typedef struct uacpi_memory_attribute {
uacpi_u8 write_status;
uacpi_u8 caching;
uacpi_u8 range_type;
uacpi_u8 translation;
} uacpi_memory_attribute;
typedef struct uacpi_io_attribute {
uacpi_u8 range_type;
uacpi_u8 translation;
uacpi_u8 translation_type;
} uacpi_io_attribute;
typedef union uacpi_address_attribute {
uacpi_memory_attribute memory;
uacpi_io_attribute io;
uacpi_u8 type_specific;
} uacpi_address_attribute;
typedef struct uacpi_resource_address_common {
uacpi_address_attribute attribute;
uacpi_u8 type;
uacpi_u8 direction;
uacpi_u8 decode_type;
uacpi_u8 fixed_min_address;
uacpi_u8 fixed_max_address;
} uacpi_resource_address_common;
typedef struct uacpi_resource_address16 {
uacpi_resource_address_common common;
uacpi_u16 granularity;
uacpi_u16 minimum;
uacpi_u16 maximum;
uacpi_u16 translation_offset;
uacpi_u16 address_length;
uacpi_resource_source source;
} uacpi_resource_address16;
typedef struct uacpi_resource_address32 {
uacpi_resource_address_common common;
uacpi_u32 granularity;
uacpi_u32 minimum;
uacpi_u32 maximum;
uacpi_u32 translation_offset;
uacpi_u32 address_length;
uacpi_resource_source source;
} uacpi_resource_address32;
typedef struct uacpi_resource_address64 {
uacpi_resource_address_common common;
uacpi_u64 granularity;
uacpi_u64 minimum;
uacpi_u64 maximum;
uacpi_u64 translation_offset;
uacpi_u64 address_length;
uacpi_resource_source source;
} uacpi_resource_address64;
typedef struct uacpi_resource_address64_extended {
uacpi_resource_address_common common;
uacpi_u8 revision_id;
uacpi_u64 granularity;
uacpi_u64 minimum;
uacpi_u64 maximum;
uacpi_u64 translation_offset;
uacpi_u64 address_length;
uacpi_u64 attributes;
} uacpi_resource_address64_extended;
typedef struct uacpi_resource_memory24 {
uacpi_u8 write_status;
uacpi_u16 minimum;
uacpi_u16 maximum;
uacpi_u16 alignment;
uacpi_u16 length;
} uacpi_resource_memory24;
typedef struct uacpi_resource_memory32 {
uacpi_u8 write_status;
uacpi_u32 minimum;
uacpi_u32 maximum;
uacpi_u32 alignment;
uacpi_u32 length;
} uacpi_resource_memory32;
typedef struct uacpi_resource_fixed_memory32 {
uacpi_u8 write_status;
uacpi_u32 address;
uacpi_u32 length;
} uacpi_resource_fixed_memory32;
// compatibility & performance
#define UACPI_GOOD 0
#define UACPI_ACCEPTABLE 1
#define UACPI_SUB_OPTIMAL 2
typedef struct uacpi_resource_start_dependent {
uacpi_u8 length_kind;
uacpi_u8 compatibility;
uacpi_u8 performance;
} uacpi_resource_start_dependent;
typedef struct uacpi_resource_vendor_defined {
uacpi_u8 length;
uacpi_u8 data[];
} uacpi_resource_vendor;
typedef struct uacpi_resource_vendor_typed {
uacpi_u16 length;
uacpi_u8 sub_type;
uacpi_u8 uuid[16];
uacpi_u8 data[];
} uacpi_resource_vendor_typed;
typedef struct uacpi_resource_generic_register {
uacpi_u8 address_space_id;
uacpi_u8 bit_width;
uacpi_u8 bit_offset;
uacpi_u8 access_size;
uacpi_u64 address;
} uacpi_resource_generic_register;
// type
#define UACPI_GPIO_CONNECTION_INTERRUPT 0x00
#define UACPI_GPIO_CONNECTION_IO 0x01
typedef struct uacpi_interrupt_connection_flags {
uacpi_u8 triggering;
uacpi_u8 polarity;
uacpi_u8 sharing;
uacpi_u8 wake_capability;
} uacpi_interrupt_connection_flags;
// restriction
#define UACPI_IO_RESTRICTION_NONE 0x0
#define UACPI_IO_RESTRICTION_INPUT 0x1
#define UACPI_IO_RESTRICTION_OUTPUT 0x2
#define UACPI_IO_RESTRICTION_NONE_PRESERVE 0x3
typedef struct uacpi_io_connection_flags {
uacpi_u8 restriction;
uacpi_u8 sharing;
} uacpi_io_connection_flags;
// pull_configuration
#define UACPI_PIN_CONFIG_DEFAULT 0x00
#define UACPI_PIN_CONFIG_PULL_UP 0x01
#define UACPI_PIN_CONFIG_PULL_DOWN 0x02
#define UACPI_PIN_CONFIG_NO_PULL 0x03
typedef struct uacpi_resource_gpio_connection {
uacpi_u8 revision_id;
uacpi_u8 type;
uacpi_u8 direction;
union {
uacpi_interrupt_connection_flags interrupt;
uacpi_io_connection_flags io;
uacpi_u16 type_specific;
};
uacpi_u8 pull_configuration;
uacpi_u16 drive_strength;
uacpi_u16 debounce_timeout;
uacpi_u16 vendor_data_length;
uacpi_u16 pin_table_length;
uacpi_resource_source source;
uacpi_u16 *pin_table;
uacpi_u8 *vendor_data;
} uacpi_resource_gpio_connection;
// mode
#define UACPI_MODE_CONTROLLER_INITIATED 0x0
#define UACPI_MODE_DEVICE_INITIATED 0x1
typedef struct uacpi_resource_serial_bus_common {
uacpi_u8 revision_id;
uacpi_u8 type;
uacpi_u8 mode;
uacpi_u8 direction;
uacpi_u8 sharing;
uacpi_u8 type_revision_id;
uacpi_u16 type_data_length;
uacpi_u16 vendor_data_length;
uacpi_resource_source source;
uacpi_u8 *vendor_data;
} uacpi_resource_serial_bus_common;
// addressing_mode
#define UACPI_I2C_7BIT 0x0
#define UACPI_I2C_10BIT 0x1
typedef struct uacpi_resource_i2c_connection {
uacpi_resource_serial_bus_common common;
uacpi_u8 addressing_mode;
uacpi_u16 slave_address;
uacpi_u32 connection_speed;
} uacpi_resource_i2c_connection;
// wire_mode
#define UACPI_SPI_4_WIRES 0
#define UACPI_SPI_3_WIRES 1
// device_polarity
#define UACPI_SPI_ACTIVE_LOW 0
#define UACPI_SPI_ACTIVE_HIGH 1
// phase
#define UACPI_SPI_PHASE_FIRST 0
#define UACPI_SPI_PHASE_SECOND 1
// polarity
#define UACPI_SPI_START_LOW 0
#define UACPI_SPI_START_HIGH 1
typedef struct uacpi_resource_spi_connection {
uacpi_resource_serial_bus_common common;
uacpi_u8 wire_mode;
uacpi_u8 device_polarity;
uacpi_u8 data_bit_length;
uacpi_u8 phase;
uacpi_u8 polarity;
uacpi_u16 device_selection;
uacpi_u32 connection_speed;
} uacpi_resource_spi_connection;
// stop_bits
#define UACPI_UART_STOP_BITS_NONE 0b00
#define UACPI_UART_STOP_BITS_1 0b01
#define UACPI_UART_STOP_BITS_1_5 0b10
#define UACPI_UART_STOP_BITS_2 0b11
// data_bits
#define UACPI_UART_DATA_5BITS 0b000
#define UACPI_UART_DATA_6BITS 0b001
#define UACPI_UART_DATA_7BITS 0b010
#define UACPI_UART_DATA_8BITS 0b011
#define UACPI_UART_DATA_9BITS 0b100
// endianness
#define UACPI_UART_LITTLE_ENDIAN 0
#define UACPI_UART_BIG_ENDIAN 1
// parity
#define UACPI_UART_PARITY_NONE 0x00
#define UACPI_UART_PARITY_EVEN 0x01
#define UACPI_UART_PARITY_ODD 0x02
#define UACPI_UART_PARITY_MARK 0x03
#define UACPI_UART_PARITY_SPACE 0x04
// lines_enabled
#define UACPI_UART_DATA_CARRIER_DETECT (1 << 2)
#define UACPI_UART_RING_INDICATOR (1 << 3)
#define UACPI_UART_DATA_SET_READY (1 << 4)
#define UACPI_UART_DATA_TERMINAL_READY (1 << 5)
#define UACPI_UART_CLEAR_TO_SEND (1 << 6)
#define UACPI_UART_REQUEST_TO_SEND (1 << 7)
// flow_control
#define UACPI_UART_FLOW_CONTROL_NONE 0b00
#define UACPI_UART_FLOW_CONTROL_HW 0b01
#define UACPI_UART_FLOW_CONTROL_XON_XOFF 0b10
typedef struct uacpi_resource_uart_connection {
uacpi_resource_serial_bus_common common;
uacpi_u8 stop_bits;
uacpi_u8 data_bits;
uacpi_u8 endianness;
uacpi_u8 parity;
uacpi_u8 lines_enabled;
uacpi_u8 flow_control;
uacpi_u32 baud_rate;
uacpi_u16 rx_fifo;
uacpi_u16 tx_fifo;
} uacpi_resource_uart_connection;
// phy_type
#define UACPI_CSI2_PHY_C 0b00
#define UACPI_CSI2_PHY_D 0b01
typedef struct uacpi_resource_csi2_connection {
uacpi_resource_serial_bus_common common;
uacpi_u8 phy_type;
uacpi_u8 local_port;
} uacpi_resource_csi2_connection;
typedef struct uacpi_resource_pin_function {
uacpi_u8 revision_id;
uacpi_u8 sharing;
uacpi_u8 pull_configuration;
uacpi_u16 function_number;
uacpi_u16 pin_table_length;
uacpi_u16 vendor_data_length;
uacpi_resource_source source;
uacpi_u16 *pin_table;
uacpi_u8 *vendor_data;
} uacpi_resource_pin_function;
// type
#define UACPI_PIN_CONFIG_DEFAULT 0x00
#define UACPI_PIN_CONFIG_BIAS_PULL_UP 0x01
#define UACPI_PIN_CONFIG_BIAS_PULL_DOWN 0x02
#define UACPI_PIN_CONFIG_BIAS_DEFAULT 0x03
#define UACPI_PIN_CONFIG_BIAS_DISABLE 0x04
#define UACPI_PIN_CONFIG_BIAS_HIGH_IMPEDANCE 0x05
#define UACPI_PIN_CONFIG_BIAS_BUS_HOLD 0x06
#define UACPI_PIN_CONFIG_DRIVE_OPEN_DRAIN 0x07
#define UACPI_PIN_CONFIG_DRIVE_OPEN_SOURCE 0x08
#define UACPI_PIN_CONFIG_DRIVE_PUSH_PULL 0x09
#define UACPI_PIN_CONFIG_DRIVE_STRENGTH 0x0A
#define UACPI_PIN_CONFIG_SLEW_RATE 0x0B
#define UACPI_PIN_CONFIG_INPUT_DEBOUNCE 0x0C
#define UACPI_PIN_CONFIG_INPUT_SCHMITT_TRIGGER 0x0D
typedef struct uacpi_resource_pin_configuration {
uacpi_u8 revision_id;
uacpi_u8 sharing;
uacpi_u8 direction;
uacpi_u8 type;
uacpi_u32 value;
uacpi_u16 pin_table_length;
uacpi_u16 vendor_data_length;
uacpi_resource_source source;
uacpi_u16 *pin_table;
uacpi_u8 *vendor_data;
} uacpi_resource_pin_configuration;
typedef struct uacpi_resource_label {
uacpi_u16 length;
const uacpi_char *string;
} uacpi_resource_label;
typedef struct uacpi_resource_pin_group {
uacpi_u8 revision_id;
uacpi_u8 direction;
uacpi_u16 pin_table_length;
uacpi_u16 vendor_data_length;
uacpi_resource_label label;
uacpi_u16 *pin_table;
uacpi_u8 *vendor_data;
} uacpi_resource_pin_group;
typedef struct uacpi_resource_pin_group_function {
uacpi_u8 revision_id;
uacpi_u8 sharing;
uacpi_u8 direction;
uacpi_u16 function;
uacpi_u16 vendor_data_length;
uacpi_resource_source source;
uacpi_resource_label label;
uacpi_u8 *vendor_data;
} uacpi_resource_pin_group_function;
typedef struct uacpi_resource_pin_group_configuration {
uacpi_u8 revision_id;
uacpi_u8 sharing;
uacpi_u8 direction;
uacpi_u8 type;
uacpi_u32 value;
uacpi_u16 vendor_data_length;
uacpi_resource_source source;
uacpi_resource_label label;
uacpi_u8 *vendor_data;
} uacpi_resource_pin_group_configuration;
// scale
#define UACPI_SCALE_HZ 0b00
#define UACPI_SCALE_KHZ 0b01
#define UACPI_SCALE_MHZ 0b10
// frequency
#define UACPI_FREQUENCY_FIXED 0x0
#define UACPI_FREQUENCY_VARIABLE 0x1
typedef struct uacpi_resource_clock_input {
uacpi_u8 revision_id;
uacpi_u8 frequency;
uacpi_u8 scale;
uacpi_u16 divisor;
uacpi_u32 numerator;
uacpi_resource_source source;
} uacpi_resource_clock_input;
typedef struct uacpi_resource {
uacpi_u32 type;
uacpi_u32 length;
union {
uacpi_resource_irq irq;
uacpi_resource_extended_irq extended_irq;
uacpi_resource_dma dma;
uacpi_resource_fixed_dma fixed_dma;
uacpi_resource_io io;
uacpi_resource_fixed_io fixed_io;
uacpi_resource_address16 address16;
uacpi_resource_address32 address32;
uacpi_resource_address64 address64;
uacpi_resource_address64_extended address64_extended;
uacpi_resource_memory24 memory24;
uacpi_resource_memory32 memory32;
uacpi_resource_fixed_memory32 fixed_memory32;
uacpi_resource_start_dependent start_dependent;
uacpi_resource_vendor vendor;
uacpi_resource_vendor_typed vendor_typed;
uacpi_resource_generic_register generic_register;
uacpi_resource_gpio_connection gpio_connection;
uacpi_resource_serial_bus_common serial_bus_common;
uacpi_resource_i2c_connection i2c_connection;
uacpi_resource_spi_connection spi_connection;
uacpi_resource_uart_connection uart_connection;
uacpi_resource_csi2_connection csi2_connection;
uacpi_resource_pin_function pin_function;
uacpi_resource_pin_configuration pin_configuration;
uacpi_resource_pin_group pin_group;
uacpi_resource_pin_group_function pin_group_function;
uacpi_resource_pin_group_configuration pin_group_configuration;
uacpi_resource_clock_input clock_input;
};
} uacpi_resource;
#define UACPI_NEXT_RESOURCE(cur) \
((uacpi_resource*)((uacpi_u8*)(cur) + (cur)->length))
typedef struct uacpi_resources {
uacpi_size length;
uacpi_resource *entries;
} uacpi_resources;
void uacpi_free_resources(uacpi_resources*);
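The UACPI_NEXT_RESOURCE macro above advances by `length` bytes, which implies the entries are packed back-to-back with `length` covering the whole entry. A minimal sketch of walking such a list, using stand-in types (the real `uacpi_resource` union is much larger, but only the two leading fields matter for traversal):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins mirroring the header above: 'length' is the total size of the
 * entry, header included. */
typedef struct resource_hdr {
    uint32_t type;
    uint32_t length;
} resource_hdr;

#define NEXT_RESOURCE(cur) \
    ((resource_hdr *)((uint8_t *)(cur) + (cur)->length))

/* Count the entries in a packed buffer of 'total_length' bytes. */
static size_t count_resources(void *first, size_t total_length)
{
    size_t n = 0;
    uint8_t *end = (uint8_t *)first + total_length;
    resource_hdr *cur = first;

    while ((uint8_t *)cur < end) {
        n++;
        cur = NEXT_RESOURCE(cur);
    }
    return n;
}
```

In practice uacpi_for_each_resource() does this walk for you, stopping at the end tag; the manual form is mostly useful for understanding the buffer layout.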
typedef uacpi_iteration_decision (*uacpi_resource_iteration_callback)
(void *user, uacpi_resource *resource);
uacpi_status uacpi_get_current_resources(
uacpi_namespace_node *device, uacpi_resources **out_resources
);
uacpi_status uacpi_get_possible_resources(
uacpi_namespace_node *device, uacpi_resources **out_resources
);
uacpi_status uacpi_set_resources(
uacpi_namespace_node *device, uacpi_resources *resources
);
uacpi_status uacpi_for_each_resource(
uacpi_resources *resources, uacpi_resource_iteration_callback cb, void *user
);
uacpi_status uacpi_for_each_device_resource(
uacpi_namespace_node *device, const uacpi_char *method,
uacpi_resource_iteration_callback cb, void *user
);
#ifdef __cplusplus
}
#endif

src/include/uacpi/sleep.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#include <uacpi/uacpi.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Set the firmware waking vector in FACS.
*
* 'addr32' is the real mode entry-point address
* 'addr64' is the protected mode entry-point address
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_set_waking_vector(
uacpi_phys_addr addr32, uacpi_phys_addr addr64
))
typedef enum uacpi_sleep_state {
UACPI_SLEEP_STATE_S0 = 0,
UACPI_SLEEP_STATE_S1,
UACPI_SLEEP_STATE_S2,
UACPI_SLEEP_STATE_S3,
UACPI_SLEEP_STATE_S4,
UACPI_SLEEP_STATE_S5,
UACPI_SLEEP_STATE_MAX = UACPI_SLEEP_STATE_S5,
} uacpi_sleep_state;
/*
* Prepare for a given sleep state.
 * Must be called with interrupts ENABLED.
*/
uacpi_status uacpi_prepare_for_sleep_state(uacpi_sleep_state);
/*
* Enter the given sleep state after preparation.
* Must be called with interrupts DISABLED.
*/
uacpi_status uacpi_enter_sleep_state(uacpi_sleep_state);
/*
* Prepare to leave the given sleep state.
* Must be called with interrupts DISABLED.
*/
uacpi_status uacpi_prepare_for_wake_from_sleep_state(uacpi_sleep_state);
/*
* Wake from the given sleep state.
* Must be called with interrupts ENABLED.
*/
uacpi_status uacpi_wake_from_sleep_state(uacpi_sleep_state);
/*
* Attempt reset via the FADT reset register.
*/
uacpi_status uacpi_reboot(void);
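The interrupt-state requirements above impose a fixed ordering on shutdown. A sketch of an S5 power-off sequence follows, with local stand-ins for the uACPI calls (and a no-op `disable_interrupts()`) so the control flow is self-contained:

```c
#include <assert.h>

/* Local stand-ins so this sketch compiles on its own; in a real kernel these
 * are the uACPI declarations above. */
typedef int uacpi_status;
#define UACPI_STATUS_OK 0
typedef int uacpi_sleep_state;
#define UACPI_SLEEP_STATE_S5 5

static uacpi_status uacpi_prepare_for_sleep_state(uacpi_sleep_state s)
{ (void)s; return UACPI_STATUS_OK; }
static uacpi_status uacpi_enter_sleep_state(uacpi_sleep_state s)
{ (void)s; return UACPI_STATUS_OK; }
static void disable_interrupts(void) { /* e.g. cli on x86 */ }

static uacpi_status power_off(void)
{
    /* Firmware preparation must happen with interrupts still ENABLED. */
    uacpi_status st = uacpi_prepare_for_sleep_state(UACPI_SLEEP_STATE_S5);
    if (st != UACPI_STATUS_OK)
        return st;

    /* The actual transition must happen with interrupts DISABLED. */
    disable_interrupts();
    return uacpi_enter_sleep_state(UACPI_SLEEP_STATE_S5);
}
```

On real hardware uacpi_enter_sleep_state(S5) does not return on success; the error path only runs if the transition fails.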
#ifdef __cplusplus
}
#endif

#pragma once
#include <uacpi/internal/compiler.h>
#include <uacpi/platform/types.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef enum uacpi_status {
UACPI_STATUS_OK = 0,
UACPI_STATUS_MAPPING_FAILED = 1,
UACPI_STATUS_OUT_OF_MEMORY = 2,
UACPI_STATUS_BAD_CHECKSUM = 3,
UACPI_STATUS_INVALID_SIGNATURE = 4,
UACPI_STATUS_INVALID_TABLE_LENGTH = 5,
UACPI_STATUS_NOT_FOUND = 6,
UACPI_STATUS_INVALID_ARGUMENT = 7,
UACPI_STATUS_UNIMPLEMENTED = 8,
UACPI_STATUS_ALREADY_EXISTS = 9,
UACPI_STATUS_INTERNAL_ERROR = 10,
UACPI_STATUS_TYPE_MISMATCH = 11,
UACPI_STATUS_INIT_LEVEL_MISMATCH = 12,
UACPI_STATUS_NAMESPACE_NODE_DANGLING = 13,
UACPI_STATUS_NO_HANDLER = 14,
UACPI_STATUS_NO_RESOURCE_END_TAG = 15,
UACPI_STATUS_COMPILED_OUT = 16,
UACPI_STATUS_HARDWARE_TIMEOUT = 17,
UACPI_STATUS_TIMEOUT = 18,
UACPI_STATUS_OVERRIDDEN = 19,
UACPI_STATUS_DENIED = 20,
// All errors that have bytecode-related origin should go here
UACPI_STATUS_AML_UNDEFINED_REFERENCE = 0x0EFF0000,
UACPI_STATUS_AML_INVALID_NAMESTRING = 0x0EFF0001,
UACPI_STATUS_AML_OBJECT_ALREADY_EXISTS = 0x0EFF0002,
UACPI_STATUS_AML_INVALID_OPCODE = 0x0EFF0003,
UACPI_STATUS_AML_INCOMPATIBLE_OBJECT_TYPE = 0x0EFF0004,
UACPI_STATUS_AML_BAD_ENCODING = 0x0EFF0005,
UACPI_STATUS_AML_OUT_OF_BOUNDS_INDEX = 0x0EFF0006,
UACPI_STATUS_AML_SYNC_LEVEL_TOO_HIGH = 0x0EFF0007,
UACPI_STATUS_AML_INVALID_RESOURCE = 0x0EFF0008,
UACPI_STATUS_AML_LOOP_TIMEOUT = 0x0EFF0009,
UACPI_STATUS_AML_CALL_STACK_DEPTH_LIMIT = 0x0EFF000A,
} uacpi_status;
const uacpi_char *uacpi_status_to_string(uacpi_status);
#define uacpi_unlikely_error(expr) uacpi_unlikely((expr) != UACPI_STATUS_OK)
#define uacpi_likely_error(expr) uacpi_likely((expr) != UACPI_STATUS_OK)
#define uacpi_unlikely_success(expr) uacpi_unlikely((expr) == UACPI_STATUS_OK)
#define uacpi_likely_success(expr) uacpi_likely((expr) == UACPI_STATUS_OK)
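A sketch of the intended call-site pattern for these wrappers, with `uacpi_unlikely()` reduced to a plain pass-through here (in uACPI it adds a branch-prediction hint):

```c
#include <assert.h>

/* Stand-ins so the sketch is self-contained. */
#define uacpi_unlikely(expr) (expr)
typedef int uacpi_status;
#define UACPI_STATUS_OK 0
#define UACPI_STATUS_NOT_FOUND 6

#define uacpi_unlikely_error(expr) uacpi_unlikely((expr) != UACPI_STATUS_OK)

static uacpi_status find_thing(int present)
{
    return present ? UACPI_STATUS_OK : UACPI_STATUS_NOT_FOUND;
}

/* Typical propagation pattern: bail out early on any non-OK status. */
static uacpi_status use_thing(int present)
{
    uacpi_status ret = find_thing(present);
    if (uacpi_unlikely_error(ret))
        return ret;
    /* ... use the result ... */
    return UACPI_STATUS_OK;
}
```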
#ifdef __cplusplus
}
#endif

src/include/uacpi/tables.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
// Forward-declared to avoid including the entire acpi.h here
struct acpi_fadt;
typedef struct uacpi_table_identifiers {
uacpi_object_name signature;
// if oemid[0] == 0 this field is ignored
char oemid[6];
// if oem_table_id[0] == 0 this field is ignored
char oem_table_id[8];
} uacpi_table_identifiers;
typedef struct uacpi_table {
union {
uacpi_virt_addr virt_addr;
void *ptr;
struct acpi_sdt_hdr *hdr;
};
// Index number used to identify this table internally
uacpi_size index;
} uacpi_table;
/*
* Install a table from either a virtual or a physical address.
* The table is simply stored in the internal table array, and not loaded by
* the interpreter (see uacpi_table_load).
*
* The table is optionally returned via 'out_table'.
*
* Manual calls to uacpi_table_install are not subject to filtering via the
* table installation callback (if any).
*/
uacpi_status uacpi_table_install(
void*, uacpi_table *out_table
);
uacpi_status uacpi_table_install_physical(
uacpi_phys_addr, uacpi_table *out_table
);
/*
* Load a previously installed table by feeding it to the interpreter.
*/
uacpi_status uacpi_table_load(uacpi_size index);
/*
* Helpers for finding tables.
*
* NOTE:
* The returned table's reference count is incremented by 1, which keeps its
* mapping alive forever unless uacpi_table_unref() is called for this table
* later on. Calling uacpi_table_find_next_with_same_signature() on a table also
* drops its reference count by 1, so if you want to keep it mapped you must
* manually call uacpi_table_ref() beforehand.
*/
uacpi_status uacpi_table_find_by_signature(
const uacpi_char *signature, uacpi_table *out_table
);
uacpi_status uacpi_table_find_next_with_same_signature(
uacpi_table *in_out_table
);
uacpi_status uacpi_table_find(
const uacpi_table_identifiers *id, uacpi_table *out_table
);
/*
* Increment/decrement a table's reference count.
* The table is unmapped when the reference count drops to 0.
*/
uacpi_status uacpi_table_ref(uacpi_table*);
uacpi_status uacpi_table_unref(uacpi_table*);
/*
* Returns the pointer to a sanitized internal version of FADT.
*
* The revision is guaranteed to be correct. All of the registers are converted
* to GAS format. Fields that might contain garbage are cleared.
*/
uacpi_status uacpi_table_fadt(struct acpi_fadt**);
typedef enum uacpi_table_installation_disposition {
// Allow the table to be installed as-is
UACPI_TABLE_INSTALLATION_DISPOSITON_ALLOW = 0,
/*
* Deny the table from being installed completely. This is useful for
* debugging various problems, e.g. AML loading bad SSDTs that cause the
* system to hang or enter an undesired state.
*/
UACPI_TABLE_INSTALLATION_DISPOSITON_DENY,
/*
* Override the table being installed with the table at the virtual address
* returned in 'out_override_address'.
*/
UACPI_TABLE_INSTALLATION_DISPOSITON_VIRTUAL_OVERRIDE,
/*
* Override the table being installed with the table at the physical address
* returned in 'out_override_address'.
*/
UACPI_TABLE_INSTALLATION_DISPOSITON_PHYSICAL_OVERRIDE,
} uacpi_table_installation_disposition;
typedef uacpi_table_installation_disposition (*uacpi_table_installation_handler)
(struct acpi_sdt_hdr *hdr, uacpi_u64 *out_override_address);
/*
* Set a handler that is invoked for each table before it gets installed.
*
* Depending on the return value, the table is either allowed to be installed
 * as-is, denied, or overridden with a new one.
*/
uacpi_status uacpi_set_table_installation_handler(
uacpi_table_installation_handler handler
);
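A sketch of such a handler, using stand-in types mirroring the declarations above (the real handler receives a `struct acpi_sdt_hdr *`, whose first member is the 4-byte signature):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for the disposition enum and table header above. */
typedef enum {
    DISPOSITION_ALLOW = 0,
    DISPOSITION_DENY,
} disposition;

struct sdt_hdr {
    char signature[4];
    /* ... rest of the standard ACPI table header ... */
};

/* Sketch: deny every SSDT, e.g. while bisecting a hang caused by bad AML. */
static disposition table_filter(struct sdt_hdr *hdr, uint64_t *out_override)
{
    (void)out_override; /* only used for the override dispositions */

    if (memcmp(hdr->signature, "SSDT", 4) == 0)
        return DISPOSITION_DENY;
    return DISPOSITION_ALLOW;
}
```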
#ifdef __cplusplus
}
#endif

src/include/uacpi/types.h
#pragma once
#include <uacpi/platform/types.h>
#include <uacpi/platform/compiler.h>
#include <uacpi/platform/arch_helpers.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef enum uacpi_init_level {
// Reboot state, nothing is available
UACPI_INIT_LEVEL_EARLY = 0,
/*
 * State after a successful call to uacpi_initialize. Table API and
* other helpers that don't depend on the ACPI namespace may be used.
*/
UACPI_INIT_LEVEL_SUBSYSTEM_INITIALIZED = 1,
/*
 * State after a successful call to uacpi_namespace_load. Most of the API
 * may be used, the namespace can be iterated, etc.
*/
UACPI_INIT_LEVEL_NAMESPACE_LOADED = 2,
/*
* The final initialization stage, this is entered after the call to
* uacpi_namespace_initialize. All API is available to use.
*/
UACPI_INIT_LEVEL_NAMESPACE_INITIALIZED = 3,
} uacpi_init_level;
typedef enum uacpi_log_level {
/*
* Super verbose logging, every op & uop being processed is logged.
* Mostly useful for tracking down hangs/lockups.
*/
UACPI_LOG_DEBUG = 5,
/*
* A little verbose, every operation region access is traced with a bit of
* extra information on top.
*/
UACPI_LOG_TRACE = 4,
/*
* Only logs the bare minimum information about state changes and/or
* initialization progress.
*/
UACPI_LOG_INFO = 3,
/*
* Logs recoverable errors and/or non-important aborts.
*/
UACPI_LOG_WARN = 2,
/*
* Logs only critical errors that might affect the ability to initialize or
* prevent stable runtime.
*/
UACPI_LOG_ERROR = 1,
} uacpi_log_level;
#if UACPI_POINTER_SIZE == 4 && defined(UACPI_PHYS_ADDR_IS_32BITS)
typedef uacpi_u32 uacpi_phys_addr;
typedef uacpi_u32 uacpi_io_addr;
#else
typedef uacpi_u64 uacpi_phys_addr;
typedef uacpi_u64 uacpi_io_addr;
#endif
typedef struct uacpi_pci_address {
uacpi_u16 segment;
uacpi_u8 bus;
uacpi_u8 device;
uacpi_u8 function;
} uacpi_pci_address;
typedef struct uacpi_data_view {
union {
uacpi_u8 *bytes;
const uacpi_u8 *const_bytes;
uacpi_char *text;
const uacpi_char *const_text;
};
uacpi_size length;
} uacpi_data_view;
typedef void *uacpi_handle;
typedef struct uacpi_namespace_node uacpi_namespace_node;
typedef enum uacpi_object_type {
UACPI_OBJECT_UNINITIALIZED = 0,
UACPI_OBJECT_INTEGER = 1,
UACPI_OBJECT_STRING = 2,
UACPI_OBJECT_BUFFER = 3,
UACPI_OBJECT_PACKAGE = 4,
UACPI_OBJECT_FIELD_UNIT = 5,
UACPI_OBJECT_DEVICE = 6,
UACPI_OBJECT_EVENT = 7,
UACPI_OBJECT_METHOD = 8,
UACPI_OBJECT_MUTEX = 9,
UACPI_OBJECT_OPERATION_REGION = 10,
UACPI_OBJECT_POWER_RESOURCE = 11,
UACPI_OBJECT_PROCESSOR = 12,
UACPI_OBJECT_THERMAL_ZONE = 13,
UACPI_OBJECT_BUFFER_FIELD = 14,
UACPI_OBJECT_DEBUG = 16,
UACPI_OBJECT_REFERENCE = 20,
UACPI_OBJECT_BUFFER_INDEX = 21,
UACPI_OBJECT_MAX_TYPE_VALUE = UACPI_OBJECT_BUFFER_INDEX
} uacpi_object_type;
// Type bits for API requiring a bit mask, e.g. uacpi_eval_typed
typedef enum uacpi_object_type_bits {
UACPI_OBJECT_INTEGER_BIT = (1 << UACPI_OBJECT_INTEGER),
UACPI_OBJECT_STRING_BIT = (1 << UACPI_OBJECT_STRING),
UACPI_OBJECT_BUFFER_BIT = (1 << UACPI_OBJECT_BUFFER),
UACPI_OBJECT_PACKAGE_BIT = (1 << UACPI_OBJECT_PACKAGE),
UACPI_OBJECT_FIELD_UNIT_BIT = (1 << UACPI_OBJECT_FIELD_UNIT),
UACPI_OBJECT_DEVICE_BIT = (1 << UACPI_OBJECT_DEVICE),
UACPI_OBJECT_EVENT_BIT = (1 << UACPI_OBJECT_EVENT),
UACPI_OBJECT_METHOD_BIT = (1 << UACPI_OBJECT_METHOD),
UACPI_OBJECT_MUTEX_BIT = (1 << UACPI_OBJECT_MUTEX),
UACPI_OBJECT_OPERATION_REGION_BIT = (1 << UACPI_OBJECT_OPERATION_REGION),
UACPI_OBJECT_POWER_RESOURCE_BIT = (1 << UACPI_OBJECT_POWER_RESOURCE),
UACPI_OBJECT_PROCESSOR_BIT = (1 << UACPI_OBJECT_PROCESSOR),
UACPI_OBJECT_THERMAL_ZONE_BIT = (1 << UACPI_OBJECT_THERMAL_ZONE),
UACPI_OBJECT_BUFFER_FIELD_BIT = (1 << UACPI_OBJECT_BUFFER_FIELD),
UACPI_OBJECT_DEBUG_BIT = (1 << UACPI_OBJECT_DEBUG),
UACPI_OBJECT_REFERENCE_BIT = (1 << UACPI_OBJECT_REFERENCE),
UACPI_OBJECT_BUFFER_INDEX_BIT = (1 << UACPI_OBJECT_BUFFER_INDEX),
UACPI_OBJECT_ANY_BIT = 0xFFFFFFFF,
} uacpi_object_type_bits;
typedef struct uacpi_object uacpi_object;
void uacpi_object_ref(uacpi_object *obj);
void uacpi_object_unref(uacpi_object *obj);
uacpi_object_type uacpi_object_get_type(uacpi_object*);
uacpi_object_type_bits uacpi_object_get_type_bit(uacpi_object*);
/*
* Returns UACPI_TRUE if the provided object's type matches this type.
*/
uacpi_bool uacpi_object_is(uacpi_object*, uacpi_object_type);
/*
* Returns UACPI_TRUE if the provided object's type is one of the values
* specified in the 'type_mask' of UACPI_OBJECT_*_BIT.
*/
uacpi_bool uacpi_object_is_one_of(
uacpi_object*, uacpi_object_type_bits type_mask
);
const uacpi_char *uacpi_object_type_to_string(uacpi_object_type);
/*
* Create an uninitialized object. The object can be further overwritten via
* uacpi_object_assign_* to anything.
*/
uacpi_object *uacpi_object_create_uninitialized(void);
/*
* Create an integer object with the value provided.
*/
uacpi_object *uacpi_object_create_integer(uacpi_u64);
typedef enum uacpi_overflow_behavior {
UACPI_OVERFLOW_ALLOW = 0,
UACPI_OVERFLOW_TRUNCATE,
UACPI_OVERFLOW_DISALLOW,
} uacpi_overflow_behavior;
/*
* Same as uacpi_object_create_integer, but introduces additional ways to
* control what happens if the provided integer is larger than 32-bits, and the
* AML code expects 32-bit integers.
*
* - UACPI_OVERFLOW_ALLOW -> do nothing, same as the vanilla helper
* - UACPI_OVERFLOW_TRUNCATE -> truncate the integer to 32-bits if it happens to
* be larger than allowed by the DSDT
* - UACPI_OVERFLOW_DISALLOW -> fail object creation with
* UACPI_STATUS_INVALID_ARGUMENT if the provided
* value happens to be too large
*/
uacpi_status uacpi_object_create_integer_safe(
uacpi_u64, uacpi_overflow_behavior, uacpi_object **out_obj
);
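The overflow policy described above can be sketched as a small self-contained function (stand-in enum names; `-1` stands in for the UACPI_STATUS_INVALID_ARGUMENT failure):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the enum above. */
typedef enum {
    OVERFLOW_ALLOW = 0,
    OVERFLOW_TRUNCATE,
    OVERFLOW_DISALLOW,
} overflow_behavior;

/* Sketch of the policy applied when the DSDT declares 32-bit integers and the
 * caller passes a wider value. Returns 0 on success, -1 on rejection. */
static int apply_overflow_policy(uint64_t in, overflow_behavior b,
                                 uint64_t *out)
{
    if (in > UINT32_MAX) {
        switch (b) {
        case OVERFLOW_TRUNCATE:
            *out = (uint32_t)in; /* keep only the low 32 bits */
            return 0;
        case OVERFLOW_DISALLOW:
            return -1;
        case OVERFLOW_ALLOW:
            break; /* store the full 64-bit value as-is */
        }
    }
    *out = in;
    return 0;
}
```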
uacpi_status uacpi_object_assign_integer(uacpi_object*, uacpi_u64 value);
uacpi_status uacpi_object_get_integer(uacpi_object*, uacpi_u64 *out);
/*
* Create a string/buffer object. Takes in a constant view of the data.
*
* NOTE: The data is copied to a separately allocated buffer and is not taken
* ownership of.
*/
uacpi_object *uacpi_object_create_string(uacpi_data_view);
uacpi_object *uacpi_object_create_cstring(const uacpi_char*);
uacpi_object *uacpi_object_create_buffer(uacpi_data_view);
/*
* Returns a writable view of the data stored in the string or buffer type
* object.
*/
uacpi_status uacpi_object_get_string_or_buffer(
uacpi_object*, uacpi_data_view *out
);
uacpi_status uacpi_object_get_string(uacpi_object*, uacpi_data_view *out);
uacpi_status uacpi_object_get_buffer(uacpi_object*, uacpi_data_view *out);
/*
* Returns UACPI_TRUE if the provided string object is actually an AML namepath.
*
* This can only be the case for package elements. If a package element is
* specified as a path to an object in AML, it's not resolved by the interpreter
* right away as it might not have been defined at that point yet, and is
* instead stored as a special string object to be resolved by client code
* when needed.
*
* Example usage:
* uacpi_namespace_node *target_node = UACPI_NULL;
*
* uacpi_object *obj = UACPI_NULL;
* uacpi_eval(scope, path, UACPI_NULL, &obj);
*
* uacpi_object_array arr;
* uacpi_object_get_package(obj, &arr);
*
* if (uacpi_object_is_aml_namepath(arr.objects[0])) {
* uacpi_object_resolve_as_aml_namepath(
* arr.objects[0], scope, &target_node
* );
* }
*/
uacpi_bool uacpi_object_is_aml_namepath(uacpi_object*);
/*
* Resolve an AML namepath contained in a string object.
*
* This is only applicable to objects that are package elements. See an
* explanation of how this works in the comment above the declaration of
* uacpi_object_is_aml_namepath.
*
* This is a shorthand for:
* uacpi_data_view view;
* uacpi_object_get_string(object, &view);
*
* target_node = uacpi_namespace_node_resolve_from_aml_namepath(
* scope, view.text
* );
*/
uacpi_status uacpi_object_resolve_as_aml_namepath(
uacpi_object*, uacpi_namespace_node *scope, uacpi_namespace_node **out_node
);
/*
* Make the provided object a string/buffer.
* Takes in a constant view of the data to be stored in the object.
*
* NOTE: The data is copied to a separately allocated buffer and is not taken
* ownership of.
*/
uacpi_status uacpi_object_assign_string(uacpi_object*, uacpi_data_view in);
uacpi_status uacpi_object_assign_buffer(uacpi_object*, uacpi_data_view in);
typedef struct uacpi_object_array {
uacpi_object **objects;
uacpi_size count;
} uacpi_object_array;
/*
* Create a package object and store all of the objects in the array inside.
* The array is allowed to be empty.
*
* NOTE: the reference count of each object is incremented before being stored
* in the object. Client code must remove all of the locally created
* references at its own discretion.
*/
uacpi_object *uacpi_object_create_package(uacpi_object_array in);
/*
* Returns the list of objects stored in a package object.
*
* NOTE: the reference count of the objects stored inside is not incremented,
 *       which means destroying/overwriting the object also potentially destroys
* all of the objects stored inside unless the reference count is
* incremented by the client via uacpi_object_ref.
*/
uacpi_status uacpi_object_get_package(uacpi_object*, uacpi_object_array *out);
/*
* Make the provided object a package and store all of the objects in the array
* inside. The array is allowed to be empty.
*
* NOTE: the reference count of each object is incremented before being stored
* in the object. Client code must remove all of the locally created
* references at its own discretion.
*/
uacpi_status uacpi_object_assign_package(uacpi_object*, uacpi_object_array in);
/*
* Create a reference object and make it point to 'child'.
*
* NOTE: child's reference count is incremented by one. Client code must remove
* all of the locally created references at its own discretion.
*/
uacpi_object *uacpi_object_create_reference(uacpi_object *child);
/*
* Make the provided object a reference and make it point to 'child'.
*
* NOTE: child's reference count is incremented by one. Client code must remove
* all of the locally created references at its own discretion.
*/
uacpi_status uacpi_object_assign_reference(uacpi_object*, uacpi_object *child);
/*
* Retrieve the object pointed to by a reference object.
*
* NOTE: the reference count of the returned object is incremented by one and
* must be uacpi_object_unref'ed by the client when no longer needed.
*/
uacpi_status uacpi_object_get_dereferenced(uacpi_object*, uacpi_object **out);
typedef struct uacpi_processor_info {
uacpi_u8 id;
uacpi_u32 block_address;
uacpi_u8 block_length;
} uacpi_processor_info;
/*
* Returns the information about the provided processor object.
*/
uacpi_status uacpi_object_get_processor_info(
uacpi_object*, uacpi_processor_info *out
);
typedef struct uacpi_power_resource_info {
uacpi_u8 system_level;
uacpi_u16 resource_order;
} uacpi_power_resource_info;
/*
* Returns the information about the provided power resource object.
*/
uacpi_status uacpi_object_get_power_resource_info(
uacpi_object*, uacpi_power_resource_info *out
);
typedef enum uacpi_region_op {
UACPI_REGION_OP_ATTACH = 1,
UACPI_REGION_OP_READ = 2,
UACPI_REGION_OP_WRITE = 3,
UACPI_REGION_OP_DETACH = 4,
} uacpi_region_op;
typedef struct uacpi_region_attach_data {
void *handler_context;
uacpi_namespace_node *region_node;
void *out_region_context;
} uacpi_region_attach_data;
typedef struct uacpi_region_rw_data {
void *handler_context;
void *region_context;
union {
uacpi_phys_addr address;
uacpi_u64 offset;
};
uacpi_u64 value;
uacpi_u8 byte_width;
} uacpi_region_rw_data;
typedef struct uacpi_region_detach_data {
void *handler_context;
void *region_context;
uacpi_namespace_node *region_node;
} uacpi_region_detach_data;
typedef uacpi_status (*uacpi_region_handler)
(uacpi_region_op op, uacpi_handle op_data);
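A region handler is a single entry point dispatching on the op code. The sketch below models a SystemMemory-style handler against a fake 16-byte device window, with stand-in types mirroring the structs above (the real handler receives a `uacpi_handle` it casts to the matching `uacpi_region_*_data`):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins mirroring the declarations above. */
typedef enum { OP_ATTACH = 1, OP_READ, OP_WRITE, OP_DETACH } region_op;

struct rw_data {
    uint64_t offset;
    uint64_t value;
    uint8_t byte_width;
};

static uint8_t fake_device[16]; /* pretend MMIO window */

static int region_handler(region_op op, void *op_data)
{
    struct rw_data *rw = op_data;

    switch (op) {
    case OP_ATTACH:
    case OP_DETACH:
        return 0; /* nothing to map/unmap in this toy example */
    case OP_READ:
        /* assemble a little-endian value of 'byte_width' bytes */
        rw->value = 0;
        for (uint8_t i = 0; i < rw->byte_width; i++)
            rw->value |= (uint64_t)fake_device[rw->offset + i] << (8 * i);
        return 0;
    case OP_WRITE:
        for (uint8_t i = 0; i < rw->byte_width; i++)
            fake_device[rw->offset + i] = (uint8_t)(rw->value >> (8 * i));
        return 0;
    }
    return -1;
}
```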
typedef uacpi_status (*uacpi_notify_handler)
(uacpi_handle context, uacpi_namespace_node *node, uacpi_u64 value);
typedef enum uacpi_address_space {
UACPI_ADDRESS_SPACE_SYSTEM_MEMORY = 0,
UACPI_ADDRESS_SPACE_SYSTEM_IO = 1,
UACPI_ADDRESS_SPACE_PCI_CONFIG = 2,
UACPI_ADDRESS_SPACE_EMBEDDED_CONTROLLER = 3,
UACPI_ADDRESS_SPACE_SMBUS = 4,
UACPI_ADDRESS_SPACE_SYSTEM_CMOS = 5,
UACPI_ADDRESS_SPACE_PCI_BAR_TARGET = 6,
UACPI_ADDRESS_SPACE_IPMI = 7,
UACPI_ADDRESS_SPACE_GENERAL_PURPOSE_IO = 8,
UACPI_ADDRESS_SPACE_GENERIC_SERIAL_BUS = 9,
UACPI_ADDRESS_SPACE_PCC = 0x0A,
UACPI_ADDRESS_SPACE_PRM = 0x0B,
UACPI_ADDRESS_SPACE_FFIXEDHW = 0x7F,
// Internal type
UACPI_ADDRESS_SPACE_TABLE_DATA = 0xDA1A,
} uacpi_address_space;
const uacpi_char *uacpi_address_space_to_string(uacpi_address_space space);
typedef union uacpi_object_name {
uacpi_char text[4];
uacpi_u32 id;
} uacpi_object_name;
typedef enum uacpi_firmware_request_type {
UACPI_FIRMWARE_REQUEST_TYPE_BREAKPOINT,
UACPI_FIRMWARE_REQUEST_TYPE_FATAL,
} uacpi_firmware_request_type;
typedef struct uacpi_firmware_request {
uacpi_u8 type;
union {
// UACPI_FIRMWARE_REQUEST_BREAKPOINT
struct {
// The context of the method currently being executed
uacpi_handle ctx;
} breakpoint;
// UACPI_FIRMWARE_REQUEST_FATAL
struct {
uacpi_u8 type;
uacpi_u32 code;
uacpi_u64 arg;
} fatal;
};
} uacpi_firmware_request;
#define UACPI_INTERRUPT_NOT_HANDLED 0
#define UACPI_INTERRUPT_HANDLED 1
typedef uacpi_u32 uacpi_interrupt_ret;
typedef uacpi_interrupt_ret (*uacpi_interrupt_handler)(uacpi_handle);
typedef enum uacpi_iteration_decision {
UACPI_ITERATION_DECISION_CONTINUE = 0,
UACPI_ITERATION_DECISION_BREAK,
// Only applicable for uacpi_namespace_for_each_child
UACPI_ITERATION_DECISION_NEXT_PEER,
} uacpi_iteration_decision;
#ifdef __cplusplus
}
#endif

src/include/uacpi/uacpi.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#include <uacpi/kernel_api.h>
#include <uacpi/namespace.h>
#ifdef UACPI_REDUCED_HARDWARE
#define UACPI_MAKE_STUB_FOR_REDUCED_HARDWARE(fn, ret) \
UACPI_NO_UNUSED_PARAMETER_WARNINGS_BEGIN \
static inline fn { return ret; } \
UACPI_NO_UNUSED_PARAMETER_WARNINGS_END
#define UACPI_STUB_IF_REDUCED_HARDWARE(fn) \
UACPI_MAKE_STUB_FOR_REDUCED_HARDWARE(fn,)
#define UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(fn) \
UACPI_MAKE_STUB_FOR_REDUCED_HARDWARE(fn, UACPI_STATUS_COMPILED_OUT)
#define UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(fn) \
UACPI_MAKE_STUB_FOR_REDUCED_HARDWARE(fn, UACPI_STATUS_OK)
#else
#define UACPI_STUB_IF_REDUCED_HARDWARE(fn) fn;
#define UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(fn) fn;
#define UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(fn) fn;
#endif
#ifdef __cplusplus
extern "C" {
#endif
/*
* Set up early access to the table subsystem. What this means is:
* - uacpi_table_find() and similar API becomes usable before the call to
* uacpi_initialize().
* - No kernel API besides logging and map/unmap will be invoked at this stage,
* allowing for heap and scheduling to still be fully offline.
* - The provided 'temporary_buffer' will be used as a temporary storage for the
* internal metadata about the tables (list, reference count, addresses,
* sizes, etc).
* - The 'temporary_buffer' is replaced with a normal heap buffer allocated via
* uacpi_kernel_alloc() after the call to uacpi_initialize() and can therefore
* be reclaimed by the kernel.
*
* The approximate overhead per table is 56 bytes, so a buffer of 4096 bytes
* yields about 73 tables in terms of capacity. uACPI also has an internal
* static buffer for tables, "UACPI_STATIC_TABLE_ARRAY_LEN", which is configured
* as 16 descriptors in length by default.
*/
uacpi_status uacpi_setup_early_table_access(
void *temporary_buffer, uacpi_size buffer_size
);
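The sizing rule stated above (roughly 56 bytes of metadata per table) makes buffer capacity a one-line computation:

```c
#include <assert.h>
#include <stddef.h>

/* Approximate per-table metadata overhead, as stated in the comment above. */
#define APPROX_TABLE_DESC_SIZE 56

/* How many table descriptors a given temporary buffer can hold. */
static size_t early_table_capacity(size_t buffer_size)
{
    return buffer_size / APPROX_TABLE_DESC_SIZE;
}
```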
/*
* Bad table checksum should be considered a fatal error
* (table load is fully aborted in this case)
*/
#define UACPI_FLAG_BAD_CSUM_FATAL (1ull << 0)
/*
* Unexpected table signature should be considered a fatal error
* (table load is fully aborted in this case)
*/
#define UACPI_FLAG_BAD_TBL_SIGNATURE_FATAL (1ull << 1)
/*
* Force uACPI to use RSDT even for later revisions
*/
#define UACPI_FLAG_BAD_XSDT (1ull << 2)
/*
* If this is set, ACPI mode is not entered during the call to
* uacpi_initialize. The caller is expected to enter it later at their own
* discretion by using uacpi_enter_acpi_mode().
*/
#define UACPI_FLAG_NO_ACPI_MODE (1ull << 3)
/*
* Don't create the \_OSI method when building the namespace.
* Only enable this if you're certain that having this method breaks your AML
 * blob; more granular interface management is available via osi.h.
*/
#define UACPI_FLAG_NO_OSI (1ull << 4)
/*
* Validate table checksums at installation time instead of first use.
* Note that this makes uACPI map the entire table at once, which not all
* hosts are able to handle at early init.
*/
#define UACPI_FLAG_PROACTIVE_TBL_CSUM (1ull << 5)
/*
* Initializes the uACPI subsystem, iterates & records all relevant RSDT/XSDT
* tables. Enters ACPI mode.
*
* 'flags' is any combination of UACPI_FLAG_* above
*/
uacpi_status uacpi_initialize(uacpi_u64 flags);
/*
* Parses & executes all of the DSDT/SSDT tables.
* Initializes the event subsystem.
*/
uacpi_status uacpi_namespace_load(void);
/*
* Initializes all the necessary objects in the namespaces by calling
* _STA/_INI etc.
*/
uacpi_status uacpi_namespace_initialize(void);
// Returns the current subsystem initialization level
uacpi_init_level uacpi_get_current_init_level(void);
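The three calls above form the standard bring-up sequence, each gated on the previous one. A sketch with local stand-ins for the uACPI entry points so the control flow is self-contained:

```c
#include <assert.h>

/* Stand-ins; in a real kernel these are the uACPI functions declared above. */
typedef int uacpi_status;
#define UACPI_STATUS_OK 0

static uacpi_status uacpi_initialize(unsigned long long flags)
{ (void)flags; return UACPI_STATUS_OK; }
static uacpi_status uacpi_namespace_load(void) { return UACPI_STATUS_OK; }
static uacpi_status uacpi_namespace_initialize(void) { return UACPI_STATUS_OK; }

/* The three-phase bring-up: subsystem init, namespace load, namespace init. */
static uacpi_status acpi_bring_up(void)
{
    uacpi_status st;

    st = uacpi_initialize(0); /* tables recorded, ACPI mode entered */
    if (st != UACPI_STATUS_OK)
        return st;

    st = uacpi_namespace_load(); /* DSDT/SSDTs parsed & executed */
    if (st != UACPI_STATUS_OK)
        return st;

    return uacpi_namespace_initialize(); /* _STA/_INI walk */
}
```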
/*
* Evaluate an object within the namespace and get back its value.
 * Either 'parent' or 'path' must be valid.
 * A value of UACPI_NULL for 'parent' implies uacpi_namespace_root() relative
 * lookups, unless 'path' is already absolute.
*/
uacpi_status uacpi_eval(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_object **ret
);
uacpi_status uacpi_eval_simple(
uacpi_namespace_node *parent, const uacpi_char *path, uacpi_object **ret
);
/*
* Same as uacpi_eval() but without a return value.
*/
uacpi_status uacpi_execute(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args
);
uacpi_status uacpi_execute_simple(
uacpi_namespace_node *parent, const uacpi_char *path
);
/*
* Same as uacpi_eval, but the return value type is validated against
* the 'ret_mask'. UACPI_STATUS_TYPE_MISMATCH is returned on error.
*/
uacpi_status uacpi_eval_typed(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_object_type_bits ret_mask,
uacpi_object **ret
);
uacpi_status uacpi_eval_simple_typed(
uacpi_namespace_node *parent, const uacpi_char *path,
uacpi_object_type_bits ret_mask, uacpi_object **ret
);
/*
* A shorthand for uacpi_eval_typed with UACPI_OBJECT_INTEGER_BIT.
*/
uacpi_status uacpi_eval_integer(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_u64 *out_value
);
uacpi_status uacpi_eval_simple_integer(
uacpi_namespace_node *parent, const uacpi_char *path, uacpi_u64 *out_value
);
/*
* A shorthand for uacpi_eval_typed with
* UACPI_OBJECT_BUFFER_BIT | UACPI_OBJECT_STRING_BIT
*
* Use uacpi_object_get_string_or_buffer to retrieve the resulting buffer data.
*/
uacpi_status uacpi_eval_buffer_or_string(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_object **ret
);
uacpi_status uacpi_eval_simple_buffer_or_string(
uacpi_namespace_node *parent, const uacpi_char *path, uacpi_object **ret
);
/*
* A shorthand for uacpi_eval_typed with UACPI_OBJECT_STRING_BIT.
*
* Use uacpi_object_get_string to retrieve the resulting buffer data.
*/
uacpi_status uacpi_eval_string(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_object **ret
);
uacpi_status uacpi_eval_simple_string(
uacpi_namespace_node *parent, const uacpi_char *path, uacpi_object **ret
);
/*
* A shorthand for uacpi_eval_typed with UACPI_OBJECT_BUFFER_BIT.
*
* Use uacpi_object_get_buffer to retrieve the resulting buffer data.
*/
uacpi_status uacpi_eval_buffer(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_object **ret
);
uacpi_status uacpi_eval_simple_buffer(
uacpi_namespace_node *parent, const uacpi_char *path, uacpi_object **ret
);
/*
* A shorthand for uacpi_eval_typed with UACPI_OBJECT_PACKAGE_BIT.
*
* Use uacpi_object_get_package to retrieve the resulting object array.
*/
uacpi_status uacpi_eval_package(
uacpi_namespace_node *parent, const uacpi_char *path,
const uacpi_object_array *args, uacpi_object **ret
);
uacpi_status uacpi_eval_simple_package(
uacpi_namespace_node *parent, const uacpi_char *path, uacpi_object **ret
);
/*
* Get the bitness of the currently loaded AML code according to the DSDT.
*
* Returns either 32 or 64.
*/
uacpi_status uacpi_get_aml_bitness(uacpi_u8 *out_bitness);
/*
* Helpers for entering & leaving ACPI mode. Note that ACPI mode is entered
* automatically during the call to uacpi_initialize().
*/
UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enter_acpi_mode(void)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_leave_acpi_mode(void)
)
/*
* Attempt to acquire the global lock for 'timeout' milliseconds.
* 0xFFFF implies infinite wait.
*
* On success, 'out_seq' is set to a unique sequence number for the current
* acquire transaction. This number is used for validation during release.
*/
uacpi_status uacpi_acquire_global_lock(uacpi_u16 timeout, uacpi_u32 *out_seq);
uacpi_status uacpi_release_global_lock(uacpi_u32 seq);
/*
* Reset the global uACPI state by freeing all internally allocated data
* structures & resetting any global variables. After this call, uACPI must be
* re-initialized from scratch to be used again.
*
* This is called by uACPI automatically if a fatal error occurs during a call
* to uacpi_initialize/uacpi_namespace_load etc. in order to prevent accidental
* use of partially uninitialized subsystems.
*/
void uacpi_state_reset(void);
#ifdef __cplusplus
}
#endif

#pragma once
#include <uacpi/status.h>
#include <uacpi/types.h>
#include <uacpi/namespace.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Checks whether the device at 'node' matches any of the PNP ids provided in
* 'list' (terminated by a UACPI_NULL). This is done by first attempting to
* match the value returned from _HID and then the value(s) from _CID.
*
* Note that the presence of the device (_STA) is not verified here.
*/
uacpi_bool uacpi_device_matches_pnp_id(
uacpi_namespace_node *node,
const uacpi_char *const *list
);
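The id-list shape this function expects is a NULL-terminated array of C strings (UACPI_NULL is just NULL). A self-contained sketch of the matching step; the `pci_roots` list in the test uses the well-known PCI root bridge ids:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch: does 'id' appear in a NULL-terminated list of PNP id strings? */
static int id_in_list(const char *id, const char *const *list)
{
    for (; *list != NULL; list++)
        if (strcmp(id, *list) == 0)
            return 1;
    return 0;
}
```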
/*
* Find all the devices in the namespace starting at 'parent' matching the
* specified 'hids' (terminated by a UACPI_NULL) against any value from _HID or
* _CID. Only devices reported as present via _STA are checked. Any matching
* devices are then passed to the 'cb'.
*/
uacpi_status uacpi_find_devices_at(
uacpi_namespace_node *parent,
const uacpi_char *const *hids,
uacpi_iteration_callback cb,
void *user
);
/*
* Same as uacpi_find_devices_at, except this starts at the root and only
* matches one hid.
*/
uacpi_status uacpi_find_devices(
const uacpi_char *hid,
uacpi_iteration_callback cb,
void *user
);
typedef enum uacpi_interrupt_model {
UACPI_INTERRUPT_MODEL_PIC = 0,
UACPI_INTERRUPT_MODEL_IOAPIC = 1,
UACPI_INTERRUPT_MODEL_IOSAPIC = 2,
} uacpi_interrupt_model;
uacpi_status uacpi_set_interrupt_model(uacpi_interrupt_model);
typedef struct uacpi_pci_routing_table_entry {
uacpi_u32 address;
uacpi_u32 index;
uacpi_namespace_node *source;
uacpi_u8 pin;
} uacpi_pci_routing_table_entry;
typedef struct uacpi_pci_routing_table {
uacpi_size num_entries;
uacpi_pci_routing_table_entry entries[];
} uacpi_pci_routing_table;
void uacpi_free_pci_routing_table(uacpi_pci_routing_table*);
uacpi_status uacpi_get_pci_routing_table(
uacpi_namespace_node *parent, uacpi_pci_routing_table **out_table
);
typedef struct uacpi_id_string {
// size of the string including the null byte
uacpi_u32 size;
uacpi_char *value;
} uacpi_id_string;
void uacpi_free_id_string(uacpi_id_string *id);
/*
* Evaluate a device's _HID method and get its value.
 * The returned structure must be freed using uacpi_free_id_string.
*/
uacpi_status uacpi_eval_hid(uacpi_namespace_node*, uacpi_id_string **out_id);
typedef struct uacpi_pnp_id_list {
// number of 'ids' in the list
uacpi_u32 num_ids;
// size of the 'ids' list including the string lengths
uacpi_u32 size;
// list of PNP ids
uacpi_id_string ids[];
} uacpi_pnp_id_list;
void uacpi_free_pnp_id_list(uacpi_pnp_id_list *list);
/*
* Evaluate a device's _CID method and get its value.
* The returned structure must be freed using uacpi_free_pnp_id_list.
*/
uacpi_status uacpi_eval_cid(uacpi_namespace_node*, uacpi_pnp_id_list **out_list);
/*
* Evaluate a device's _STA method and get its value.
* If this method is not found, the value of 'flags' is set to all ones.
*/
uacpi_status uacpi_eval_sta(uacpi_namespace_node*, uacpi_u32 *flags);
/*
* Evaluate a device's _ADR method and get its value.
*/
uacpi_status uacpi_eval_adr(uacpi_namespace_node*, uacpi_u64 *out);
/*
* Evaluate a device's _CLS method and get its value.
* The format of returned string is BBSSPP where:
* BB => Base Class (e.g. 01 => Mass Storage)
* SS => Sub-Class (e.g. 06 => SATA)
* PP => Programming Interface (e.g. 01 => AHCI)
 * The returned structure must be freed using uacpi_free_id_string.
*/
uacpi_status uacpi_eval_cls(uacpi_namespace_node*, uacpi_id_string **out_id);
/*
* Evaluate a device's _UID method and get its value.
 * The returned structure must be freed using uacpi_free_id_string.
*/
uacpi_status uacpi_eval_uid(uacpi_namespace_node*, uacpi_id_string **out_uid);
// uacpi_namespace_node_info->flags
#define UACPI_NS_NODE_INFO_HAS_ADR (1 << 0)
#define UACPI_NS_NODE_INFO_HAS_HID (1 << 1)
#define UACPI_NS_NODE_INFO_HAS_UID (1 << 2)
#define UACPI_NS_NODE_INFO_HAS_CID (1 << 3)
#define UACPI_NS_NODE_INFO_HAS_CLS (1 << 4)
#define UACPI_NS_NODE_INFO_HAS_SXD (1 << 5)
#define UACPI_NS_NODE_INFO_HAS_SXW (1 << 6)
typedef struct uacpi_namespace_node_info {
// Size of the entire structure
uacpi_u32 size;
// Object information
uacpi_object_name name;
uacpi_object_type type;
uacpi_u8 num_params;
// UACPI_NS_NODE_INFO_HAS_*
uacpi_u8 flags;
/*
* A mapping of [S1..S4] to the shallowest D state supported by the device
* in that S state.
*/
uacpi_u8 sxd[4];
/*
* A mapping of [S0..S4] to the deepest D state supported by the device
* in that S state to be able to wake itself.
*/
uacpi_u8 sxw[5];
uacpi_u64 adr;
uacpi_id_string hid;
uacpi_id_string uid;
uacpi_id_string cls;
uacpi_pnp_id_list cid;
} uacpi_namespace_node_info;
void uacpi_free_namespace_node_info(uacpi_namespace_node_info*);
/*
* Retrieve information about a namespace node. This includes the attached
* object's type, name, number of parameters (if it's a method), the result of
* evaluating _ADR, _UID, _CLS, _HID, _CID, as well as _SxD and _SxW.
*
* The returned structure must be freed with uacpi_free_namespace_node_info.
*/
uacpi_status uacpi_get_namespace_node_info(
uacpi_namespace_node *node, uacpi_namespace_node_info **out_info
);
#ifdef __cplusplus
}
#endif

src/lib/io.c Normal file
@@ -0,0 +1,63 @@
#include <stdint.h>
uint64_t rdmsr(uint64_t msr){
uint32_t low, high;
asm volatile (
"rdmsr"
: "=a"(low), "=d"(high)
: "c"(msr)
);
return ((uint64_t)high << 32) | low;
}
void wrmsr(uint64_t msr, uint64_t value){
uint32_t low = value & 0xFFFFFFFF;
uint32_t high = value >> 32;
asm volatile (
"wrmsr"
:
: "c"(msr), "a"(low), "d"(high)
);
}
void outb(uint16_t port, uint8_t val){
asm volatile ( "outb %0, %1" : : "a"(val), "Nd"(port) : "memory");
}
void outw(uint16_t port, uint16_t val){
asm volatile ( "outw %0, %1" : : "a"(val), "Nd"(port) : "memory");
}
void outl(uint16_t port, uint32_t val){
asm volatile ( "outl %0, %1" : : "a"(val), "Nd"(port) : "memory");
}
uint8_t inb(uint16_t port){
uint8_t ret;
asm volatile ( "inb %1, %0"
: "=a"(ret)
: "Nd"(port)
: "memory");
return ret;
}
uint16_t inw(uint16_t port){
uint16_t ret;
asm volatile ( "inw %1, %0"
: "=a"(ret)
: "Nd"(port)
: "memory");
return ret;
}
uint32_t inl(uint16_t port){
uint32_t ret;
asm volatile ( "inl %1, %0"
: "=a"(ret)
: "Nd"(port)
: "memory");
return ret;
}

src/lib/spinlock.c Normal file
@@ -0,0 +1,15 @@
#include <lock.h>
#include <stdatomic.h>
#include <stdio.h>
void acquire_lock(atomic_flag *lock){
while(atomic_flag_test_and_set_explicit(lock, memory_order_acquire)){
asm volatile("pause"); /* spin-wait hint; eases cache-line contention */
}
/* the acquire order on test_and_set already fences the critical section */
}
void free_lock(atomic_flag *lock){
atomic_flag_clear_explicit(lock, memory_order_release);
}

src/lib/stdio.c Normal file
@@ -0,0 +1,468 @@
#include <stdint.h>
#include <stdarg.h>
#include <stdbool.h>
#include <string.h>
#include <lock.h>
#include "../include/stdio.h"
#include "../drivers/serial.h"
#include "../hal/tsc.h"
#define NORMAL 0
#define FORMAT_SPECIFIER 1
extern bool serial_enabled;
void klog(int level, const char *func, const char *msg){
switch (level) {
case LOG_INFO:
kprintf("[{d}] info: {s}: {sn}", tsc_get_timestamp(), func, msg);
if(serial_enabled){
serial_kprintf("{k}KLOG_INFO{k}: {s}: {sn}", ANSI_COLOR_MAGENTA, ANSI_COLOR_RESET, func, msg);
}
return;
case LOG_WARN:
kprintf("[{d}] {k}warning{k}: {s}: {sn}", tsc_get_timestamp(), ANSI_COLOR_YELLOW, ANSI_COLOR_RESET, func, msg);
if(serial_enabled){
serial_kprintf("{k}KLOG_WARN{k}: {s}: {sn}", ANSI_COLOR_YELLOW, ANSI_COLOR_RESET, func, msg);
}
return;
case LOG_ERROR:
kprintf("[{d}] {k}error{k}: {s}: {sn}", tsc_get_timestamp(), ANSI_COLOR_RED, ANSI_COLOR_RESET, func, msg);
if(serial_enabled){
serial_kprintf("{k}KLOG_ERROR{k}: {s}: {sn}", ANSI_COLOR_RED, ANSI_COLOR_RESET, func, msg);
}
return;
case LOG_SUCCESS:
kprintf("[{d}] {k}success{k}: {s}: {sn}", tsc_get_timestamp(), ANSI_COLOR_GREEN, ANSI_COLOR_RESET, func, msg);
if(serial_enabled){
serial_kprintf("{k}KLOG_SUCCESS{k}: {s}: {sn}", ANSI_COLOR_GREEN, ANSI_COLOR_RESET, func, msg);
}
return;
}
return;
}
atomic_flag printf_lock = ATOMIC_FLAG_INIT;
/*
printf()
params:
string
arguments
available format specifiers:
{i}, {d} - integer
{s} - string
{c} - char
{k} - color
{n} - newline (takes no argument)
{x} - base16
{b} - binary
*/
int kprintf(const char *format_string, ...){
extern struct flanterm_context *ft_ctx;
acquire_lock(&printf_lock);
int state = NORMAL;
va_list a_list;
va_start(a_list, format_string);
for(uint64_t i = 0; i < strlen(format_string); i++){
char current = format_string[i]; // current char in string
switch (state){
case NORMAL:
switch (current) {
case '{':
state = FORMAT_SPECIFIER;
break;
default:
print_char(ft_ctx, current);
break;
}
break;
case FORMAT_SPECIFIER:
switch (current) {
case 'n':
print_str(ft_ctx, "\n");
break;
case 'k':
print_str(ft_ctx, va_arg(a_list, char*));
break;
case 'd':
case 'i':
print_int(ft_ctx, va_arg(a_list, long long));
break;
case 's':
print_str(ft_ctx, va_arg(a_list, char*));
break;
case 'c':
;
int ch = va_arg(a_list, int);
print_char(ft_ctx, ch);
break;
case 'x':
print_hex(ft_ctx, va_arg(a_list, uint64_t));
break;
case 'b':
print_bin(ft_ctx, va_arg(a_list, uint64_t));
break;
case 'l':
current = format_string[++i]; /* advance to the char after 'l' */
switch (current) {
case 'd':
case 'i':
print_int(ft_ctx, va_arg(a_list, long long int));
break;
}
break;
case '}':
state = NORMAL;
break;
}
break;
}
}
va_end(a_list);
free_lock(&printf_lock);
return 0;
}
int serial_kprintf(const char *format_string, ...){
int state = NORMAL;
va_list a_list;
va_start(a_list, format_string);
for(uint64_t i = 0; i < strlen(format_string); i++){
char current = format_string[i]; // current char in string
switch (state){
case NORMAL:
switch (current) {
case '{':
state = FORMAT_SPECIFIER;
break;
default:
serial_print_char(current);
break;
}
break;
case FORMAT_SPECIFIER:
switch (current) {
case 'n':
serial_print("\n");
break;
case 'k':
serial_print(va_arg(a_list, char*));
break;
case 'd':
case 'i':
serial_print_int(va_arg(a_list, long long));
break;
case 's':
serial_print(va_arg(a_list, char*));
break;
case 'c':
;
int ch = va_arg(a_list, int);
serial_print_char(ch);
break;
case 'x':
serial_print_hex(va_arg(a_list, uint64_t));
break;
case 'b':
serial_print_bin(va_arg(a_list, uint64_t));
break;
case 'l':
current = format_string[++i]; /* advance to the char after 'l' */
switch (current) {
case 'd':
case 'i':
serial_print_int(va_arg(a_list, long long int));
break;
}
break;
case '}':
state = NORMAL;
break;
}
break;
}
}
va_end(a_list);
return 0;
}
#define MAX_INTERGER_SIZE 128
void print_char(struct flanterm_context *ft_ctx, char c){
kernel_framebuffer_print(&c, 1);
}
void serial_print_char(char c){
serial_write(c);
}
void print_str(struct flanterm_context *ft_ctx, char *str){
kernel_framebuffer_print(str, strlen(str));
}
void print_int(struct flanterm_context *ft_ctx, uint64_t num){
char buffer[MAX_INTERGER_SIZE] = {0};
if(num == 0){
buffer[0] = '0';
}
int arr[MAX_INTERGER_SIZE] = {0};
int j = 0;
while(num != 0){
int mod = num % 10;
arr[j] = dtoc(mod);
num /= 10;
j++;
if(j == MAX_INTERGER_SIZE){
return;
}
}
/* Reverse buffer */
for(int i = 0; i < j; i++){
buffer[i] = arr[j - i - 1];
}
kernel_framebuffer_print(buffer, strlen(buffer));
}
void print_hex(struct flanterm_context *ft_ctx, uint64_t num){
char buffer[MAX_INTERGER_SIZE] = {0};
if(num == 0){
buffer[0] = '0';
}
int arr[MAX_INTERGER_SIZE] = {0};
int j = 0;
while(num != 0){
int mod = num % 16;
arr[j] = dtoc(mod);
num /= 16;
j++;
if(j == MAX_INTERGER_SIZE){
return;
}
}
/* Reverse buffer */
for(int i = 0; i < j; i++){
buffer[i] = arr[j - i - 1];
}
kernel_framebuffer_print(buffer, strlen(buffer));
}
void print_bin(struct flanterm_context *ft_ctx, uint64_t num){
char buffer[MAX_INTERGER_SIZE] = {0};
if(num == 0){
buffer[0] = '0'; /* the extraction loop below produces no digits for zero */
}
int arr[MAX_INTERGER_SIZE] = {0};
int j = 0;
while(num != 0){
int mod = num % 2;
arr[j] = dtoc(mod);
num /= 2;
j++;
if(j == MAX_INTERGER_SIZE){
return;
}
}
/* Reverse buffer */
for(int i = 0; i < j; i++){
buffer[i] = arr[j - i - 1];
}
kernel_framebuffer_print(buffer, strlen(buffer));
}
void serial_print_int(uint64_t num){
char buffer[MAX_INTERGER_SIZE] = {0};
if(num == 0){
buffer[0] = '0'; /* the extraction loop below produces no digits for zero */
}
int arr[MAX_INTERGER_SIZE] = {0};
int j = 0;
while(num != 0){
int mod = num % 10;
arr[j] = dtoc(mod);
num /= 10;
j++;
if(j == MAX_INTERGER_SIZE){
return;
}
}
/* Reverse buffer */
for(int i = 0; i < j; i++){
buffer[i] = arr[j - i - 1];
}
kernel_serial_print(buffer, strlen(buffer));
}
void serial_print_hex(uint64_t num){
char buffer[MAX_INTERGER_SIZE] = {0};
if(num == 0){
buffer[0] = '0'; /* the extraction loop below produces no digits for zero */
}
int arr[MAX_INTERGER_SIZE] = {0};
int j = 0;
while(num != 0){
int mod = num % 16;
arr[j] = dtoc(mod);
num /= 16;
j++;
if(j == MAX_INTERGER_SIZE){
return;
}
}
/* Reverse buffer */
for(int i = 0; i < j; i++){
buffer[i] = arr[j - i - 1];
}
kernel_serial_print(buffer, strlen(buffer));
}
void serial_print_bin(uint64_t num){
char buffer[MAX_INTERGER_SIZE] = {0};
if(num == 0){
buffer[0] = '0'; /* the extraction loop below produces no digits for zero */
}
int arr[MAX_INTERGER_SIZE] = {0};
int j = 0;
while(num != 0){
int mod = num % 2;
arr[j] = dtoc(mod);
num /= 2;
j++;
if(j == MAX_INTERGER_SIZE){
return;
}
}
/* Reverse buffer */
for(int i = 0; i < j; i++){
buffer[i] = arr[j - i - 1];
}
kernel_serial_print(buffer, strlen(buffer));
}
char toupper(char c){
/* ASCII lowercase letters form a contiguous range; this also
   covers 'q', which the old per-letter table missed */
if(c >= 'a' && c <= 'z'){
return c - ('a' - 'A');
}
return c;
}
atomic_flag fb_spinlock = ATOMIC_FLAG_INIT;
/* Eventually fix printf so that these print_* functions dont
write to the framebuffer but instead return to printf */
/* Prints a char array to the framebuffer, thread safe*/
void kernel_framebuffer_print(char *buffer, size_t n){
extern struct flanterm_context *ft_ctx;
//acquire_lock(&fb_spinlock);
flanterm_write(ft_ctx, buffer, n);
//free_lock(&fb_spinlock);
}
atomic_flag serial_spinlock = ATOMIC_FLAG_INIT;
/* Prints a char array to serial, thread safe*/
void kernel_serial_print(char *buffer, size_t n){
//acquire_lock(&serial_spinlock);
for(size_t i = 0; i < n; i++){
serial_print_char(buffer[i]);
}
//free_lock(&serial_spinlock);
}

src/lib/string.c Normal file
@@ -0,0 +1,82 @@
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
void *memset(void *dest, int c, uint64_t n){
uint8_t *p = (uint8_t *)dest;
for(uint64_t i = 0; i < n; i++){
p[i] = (uint8_t)c;
}
return dest;
}
void *memcpy(void *dest, const void *src, uint64_t n){
uint8_t *pdest = (uint8_t *)dest;
const uint8_t *psrc = (const uint8_t *)src;
for(uint64_t i = 0; i < n; i++){
pdest[i] = psrc[i];
}
return dest;
}
/* stolen from limine c template */
void *memmove(void *dest, const void *src, uint64_t n) {
uint8_t *pdest = (uint8_t *)dest;
const uint8_t *psrc = (const uint8_t *)src;
if(src > dest){
for (uint64_t i = 0; i < n; i++) {
pdest[i] = psrc[i];
}
}else if(src < dest){
for (uint64_t i = n; i > 0; i--) {
pdest[i-1] = psrc[i-1];
}
}
return dest;
}
int memcmp(const void *s1, const void *s2, uint64_t n){
const uint8_t *p1 = (const uint8_t *)s1;
const uint8_t *p2 = (const uint8_t *)s2;
for(uint64_t i = 0; i < n; i++){
if(p1[i] != p2[i]){
return p1[i] < p2[i] ? -1 : 1;
}
}
return 0;
}
uint64_t strlen(const char* str){
uint64_t i = 0;
while (str[i] != '\0'){
i++;
}
return i;
}
/* Converts a digit to a character */
char dtoc(int digit){
if(digit > 15){
return 0;
}else if(digit == 0){
return '0';
}
if(digit < 10){
return '0' + digit;
}else{
return 'A' + digit - 10;
}
}

src/main.c Normal file
@@ -0,0 +1,136 @@
#include <stdint.h>
#include <stddef.h>
#include <SFB25.h>
#include "limine.h"
#include "include/stdio.h"
#include "../flanterm/src/flanterm.h"
#include "flanterm/src/flanterm_backends/fb.h"
#include "hal/gdt.h"
#include "hal/idt.h"
#include "hal/apic.h"
#include "hal/timer.h"
#include "hal/smp.h"
#include "hal/tsc.h"
#include "mm/pmm.h"
#include "mm/vmm.h"
#include "mm/kmalloc.h"
#include "sys/acpi.h"
#include "sys/pci.h"
#include "drivers/serial.h"
#include "drivers/pmt.h"
#include "drivers/ahci.h"
#include "scheduler/sched.h"
static volatile struct limine_framebuffer_request framebuffer_request = {
.id = LIMINE_FRAMEBUFFER_REQUEST,
.revision = 0,
};
static volatile struct limine_hhdm_request hhdm_request = {
.id = LIMINE_HHDM_REQUEST,
.revision = 0,
};
struct flanterm_context *ft_ctx;
uint64_t hhdmoffset = 0;
void _start(void){
if(hhdm_request.response == NULL){
goto death;
}
hhdmoffset = hhdm_request.response->offset;
/* initialize framebuffer */
struct limine_framebuffer_response *fb_response = framebuffer_request.response;
if(fb_response == NULL){
goto death;
}
struct limine_framebuffer *fb = fb_response->framebuffers[0];
if(fb == NULL){
goto death;
}
ft_ctx = flanterm_fb_init(
NULL,
NULL,
fb->address, fb->width, fb->height, fb->pitch,
fb->red_mask_size, fb->red_mask_shift,
fb->green_mask_size, fb->green_mask_shift,
fb->blue_mask_size, fb->blue_mask_shift,
NULL,
NULL, NULL,
NULL, NULL,
NULL, NULL,
NULL, 0, 0, 1,
0, 0,
0
);
kprintf("Welcome to SFB/25{n}");
extern link_symbol_ptr text_start_addr, text_end_addr;
klog(LOG_INFO, "serial", "Initializing serial controller");
serial_init();
klog(LOG_SUCCESS, "serial", "Done!");
klog(LOG_INFO, "gdt", "Setting up the GDT");
set_gdt();
klog(LOG_SUCCESS, "gdt", "Done!");
klog(LOG_INFO, "idt", "Setting up the IDT");
set_idt();
klog(LOG_SUCCESS, "idt", "Done!");
klog(LOG_INFO, "acpi", "Reading ACPI tables");
acpi_init();
klog(LOG_SUCCESS, "acpi", "Done!");
klog(LOG_INFO, "apic", "Initializing APIC");
apic_init();
klog(LOG_SUCCESS, "apic", "Done!");
tsc_init();
klog(LOG_INFO, "pmm", "Setting up the PMM");
pmm_init();
klog(LOG_SUCCESS, "pmm", "Done!");
klog(LOG_INFO, "vmm", "Setting up the page tables");
vmm_init();
klog(LOG_SUCCESS, "vmm", "Done!");
kernel_heap_init();
klog(LOG_INFO, "smp", "Starting APs");
smp_init();
klog(LOG_SUCCESS, "smp", "Done!");
klog(LOG_INFO, "pci", "Enumerating PCI devices");
pci_init();
klog(LOG_SUCCESS, "pci", "Done!");
scheduler_init();
/* klog(LOG_INFO, "ahci", "Initializing AHCI controller");
ahci_init();
klog(LOG_SUCCESS, "ahci", "Done!"); */
death:
for(;;);
}
bool kernel_killed = false;
void kkill(void){
kernel_killed = true;
asm volatile("cli; hlt");
for(;;);
}

src/mm/kmalloc.c Normal file
@@ -0,0 +1,167 @@
#include <stdbool.h>
#include <stdint.h>
#include <limine.h>
#include <stdio.h>
#include <SFB25.h>
#include <string.h>
#include <lock.h>
#include "pmm.h"
#include "vmm.h"
#include "kmalloc.h"
#define KERNEL_MAX_BLOCK 512
typedef struct block_t {
uint64_t addr;
uint32_t size;
bool free;
} block_t;
block_t *base = NULL;
uint64_t *heap_addr = NULL;
void kernel_heap_init(){
extern struct limine_memmap_response *memmap_response;
extern uint64_t pmm_page_count;
/* Allocate memory for the blocks*/
base = kernel_allocate_memory(KERNEL_MAX_BLOCK * sizeof(block_t), PTE_BIT_RW | PTE_BIT_NX);
if(!base){
klog(LOG_ERROR, __func__, "Failed to allocate memory for kernel heap blocks");
kkill();
}
memset(base, 0, KERNEL_MAX_BLOCK * sizeof(block_t));
/* Allocate memory for the heap */
heap_addr = kernel_allocate_memory(KERNEL_HEAP_SIZE, PTE_BIT_RW | PTE_BIT_NX);
if(!heap_addr){
klog(LOG_ERROR, __func__, "Failed to allocate memory for the kernel heap");
kkill();
}
base->free = true;
}
void *kmalloc(uint64_t size){
/* First check if there is a free block which fits the size requirement */
for(int i = 0; i < KERNEL_MAX_BLOCK; i++){
if(base[i].addr && base[i].free && base[i].size >= size){
base[i].free = false;
return (void*)base[i].addr;
}
}
int i = 0;
/* Walk the list until the first slot that was never assigned an address */
while(base[i].addr != 0){
i++;
if(i >= KERNEL_MAX_BLOCK){
/* Over max block limit */
return NULL;
}
}
/* Fill this block in */
uint64_t addr = (uint64_t)heap_addr;
/* Calculate address offset */
for(int j = 0; j < i; j++){
if(addr > ((uint64_t)heap_addr + KERNEL_HEAP_SIZE)){
/* Out of heap memory */
return NULL;
}
addr += base[j].size;
}
memset((uint64_t*)addr, 0, size);
base[i].addr = addr;
base[i].free = false;
base[i].size = size;
return (void*)addr;
}
void kfree(void *addr){
for(int i = 0; i < KERNEL_MAX_BLOCK; i++){
if(base[i].addr == (uint64_t)addr){
base[i].free = true;
memset((void*)base[i].addr, 0, base[i].size);
return;
}
}
kprintf("kfree: attempted to free non-heap address!\n");
kkill();
}
void *krealloc(void *addr, uint64_t size) {
if (addr == NULL) {
return kmalloc(size);
}
if (size == 0) {
kfree(addr);
return NULL;
}
// Find the block corresponding to the pointer
int i;
for (i = 0; i < KERNEL_MAX_BLOCK; i++) {
if (base[i].addr == (uint64_t)addr) {
break;
}
}
if (i == KERNEL_MAX_BLOCK) {
kprintf("krealloc: attempted to realloc non-heap address!\n");
kkill();
}
block_t *block = &base[i];
uint64_t old_size = block->size;
// If the current size is already sufficient, return the same pointer
if (old_size >= size) {
return addr;
}
// Check if this block is the last allocated block in the array
bool is_last = true;
for (int j = i + 1; j < KERNEL_MAX_BLOCK; j++) {
if (base[j].addr != 0) {
is_last = false;
break;
}
}
// If it's the last block, check if there's enough space to expand
if (is_last) {
uint64_t current_end = block->addr + block->size;
uint64_t heap_end = (uint64_t)heap_addr + KERNEL_HEAP_SIZE;
uint64_t available = heap_end - current_end;
if (available >= (size - old_size)) {
// Expand the block in place
block->size = size;
return addr;
}
}
// Allocate a new block, copy data, and free the old block
void *new_ptr = kmalloc(size);
if (!new_ptr) {
return NULL; // Allocation failed
}
memcpy(new_ptr, addr, old_size);
kfree(addr);
return new_ptr;
}

src/mm/kmalloc.h Normal file
@@ -0,0 +1,10 @@
#include <stdint.h>
void kernel_heap_init();
void heap_free(uint64_t *addr);
uint64_t *heap_alloc();
void *kmalloc(uint64_t size);
void *krealloc(void *addr, uint64_t size);
void kfree(void *addr);
#define KERNEL_HEAP_SIZE 0x10000000

src/mm/pmm.c Normal file
@@ -0,0 +1,110 @@
#include <limine.h>
#include <stdio.h>
#include <SFB25.h>
#include <string.h>
#include <lock.h>
#include "pmm.h"
#include "kmalloc.h"
static volatile struct limine_memmap_request memmap_request = {
.id = LIMINE_MEMMAP_REQUEST,
.revision = 0,
};
extern uint64_t hhdmoffset;
uint64_t pmm_free_page_count = 0;
uint64_t pmm_page_count = 0;
uint64_t mem_size = 0;
struct limine_memmap_response *memmap_response;
/* Freelist implementation */
uint64_t *free_list = NULL;
atomic_flag pmm_lock = ATOMIC_FLAG_INIT;
void pmm_free(uint64_t *addr){
acquire_lock(&pmm_lock);
uint64_t *virt_addr = (uint64_t*)((uint64_t)addr+hhdmoffset);
/* Make the given page point to the previous free page */
*virt_addr = (uint64_t)free_list;
/* Make the free_list point to the newly freed page */
free_list = virt_addr;
pmm_free_page_count++;
free_lock(&pmm_lock);
return;
}
uint64_t *pmm_alloc(){
acquire_lock(&pmm_lock);
if(pmm_free_page_count == 0){
free_lock(&pmm_lock); /* release before bailing out, or the next caller deadlocks */
return NULL;
}
/* Fetch the address of the free page in free_list and make it point to the next free page */
uint64_t *addr = (uint64_t*)((uint64_t)free_list - hhdmoffset);
free_list = (uint64_t*)(*free_list);
pmm_free_page_count--;
free_lock(&pmm_lock);
return addr;
}
void pmm_init(){
if(memmap_request.response == NULL){
klog(LOG_ERROR, __func__, "Memmap response is null");
kkill();
}
memmap_response = memmap_request.response;
struct limine_memmap_entry **entries = memmap_response->entries;
for(uint64_t i = 0; i < memmap_response->entry_count; i++){
switch (entries[i]->type) {
case LIMINE_MEMMAP_USABLE:
//kprintf("usable: base: 0x{x}, length: 0x{xn}", entries[i]->base, entries[i]->length);
mem_size += entries[i]->length;
break;
default:
;
//kprintf("base: 0x{x}, length: 0x{xn}", entries[i]->base, entries[i]->length);
}
}
kprintf("pmm: got a total of {d}MB of memory\n", mem_size / 1048576);
bool first_entry = true;
uint64_t j;
uint64_t i;
/* Dogshit fix this */
for(i = 0; i < memmap_response->entry_count; i++){
switch (entries[i]->type) {
case LIMINE_MEMMAP_USABLE:
/* First set the first entry if it isn't set already */
if(first_entry == true){
first_entry = false;
free_list = (uint64_t*)(entries[i]->base + hhdmoffset);
j = 1;
}else{
j = 0;
}
for(; j < (entries[i]->length / BLOCK_SIZE); j++){
pmm_free((uint64_t*)(entries[i]->base + j*BLOCK_SIZE));
pmm_page_count++;
}
}
}
}

src/mm/pmm.h Normal file
@@ -0,0 +1,14 @@
#include <stdbool.h>
#include <stdint.h>
#define BLOCK_SIZE 4096
typedef struct free_page_t {
struct free_page_t *next;
uint8_t _padding[4088];
} __attribute__((packed)) free_page_t;
void pmm_init(void);
uint64_t *pmm_alloc();
void pmm_free(uint64_t *addr);

src/mm/vmm.c Normal file
@@ -0,0 +1,306 @@
#include <lock.h>
#include <stdint.h>
#include <stdio.h>
#include <SFB25.h>
#include <string.h>
#include <limine.h>
#include "pmm.h"
#include "vmm.h"
#include "../sys/acpi.h"
#include "../hal/apic.h"
struct limine_kernel_address_request kernel_addr_request = {
.id = LIMINE_KERNEL_ADDRESS_REQUEST,
.revision = 0
};
struct limine_kernel_address_response *kernel_address;
extern uint64_t hhdmoffset;
uint64_t *kernel_page_map = 0;
uint64_t kernel_virt = 0;
uint64_t text_start, text_end, rodata_start, rodata_end, data_start, data_end;
uint64_t kernel_start, kernel_end;
void vmm_set_ctx(uint64_t *page_map){
__asm__ volatile (
"movq %0, %%cr3\n"
: : "r" ((uint64_t)((uint64_t)(page_map) - hhdmoffset)) : "memory"
);
}
void vmm_init(){
struct limine_kernel_address_response *kernel_address = kernel_addr_request.response;
if(!kernel_address){
klog(LOG_ERROR, __func__, "Kernel address not received");
}
kernel_page_map = (uint64_t*)((uint64_t)pmm_alloc() + hhdmoffset);
if(!kernel_page_map){
klog(LOG_ERROR, __func__, "Allocating block for page map failed");
}
memset(kernel_page_map, 0, PAGE_SIZE);
// map kernel, stolen
extern link_symbol_ptr text_start_addr, text_end_addr,
rodata_start_addr, rodata_end_addr,
data_start_addr, data_end_addr;
text_start = ALIGN_DOWN((uint64_t)text_start_addr, PAGE_SIZE),
rodata_start = ALIGN_DOWN((uint64_t)rodata_start_addr, PAGE_SIZE),
data_start = ALIGN_DOWN((uint64_t)data_start_addr, PAGE_SIZE),
text_end = ALIGN_UP((uint64_t)text_end_addr, PAGE_SIZE),
rodata_end = ALIGN_UP((uint64_t)rodata_end_addr, PAGE_SIZE),
data_end = ALIGN_UP((uint64_t)data_end_addr, PAGE_SIZE);
// map usable, framebuffer, and reclaimable memmap entries
extern struct limine_memmap_response *memmap_response;
for(uint64_t i = 0; i < memmap_response->entry_count; i++){
if(memmap_response->entries[i]->type == LIMINE_MEMMAP_USABLE){
for(uint64_t j = 0; j < memmap_response->entries[i]->length; j+=PAGE_SIZE){
vmm_map_page(kernel_page_map, memmap_response->entries[i]->base+j+hhdmoffset, memmap_response->entries[i]->base+j, PTE_BIT_PRESENT | PTE_BIT_RW);
}
}
if(memmap_response->entries[i]->type == LIMINE_MEMMAP_FRAMEBUFFER){
for(uint64_t j = 0; j < memmap_response->entries[i]->length; j+=PAGE_SIZE){
vmm_map_page(kernel_page_map, memmap_response->entries[i]->base+j+hhdmoffset, memmap_response->entries[i]->base+j, PTE_BIT_PRESENT | PTE_BIT_RW | PTE_BIT_NX);
}
}
if(memmap_response->entries[i]->type == LIMINE_MEMMAP_BOOTLOADER_RECLAIMABLE){
for(uint64_t j = 0; j < memmap_response->entries[i]->length; j+=PAGE_SIZE){
vmm_map_page(kernel_page_map, memmap_response->entries[i]->base+j+hhdmoffset, memmap_response->entries[i]->base+j, PTE_BIT_PRESENT | PTE_BIT_RW);
}
}
if(memmap_response->entries[i]->type == LIMINE_MEMMAP_ACPI_RECLAIMABLE){
for(uint64_t j = 0; j < memmap_response->entries[i]->length; j+=PAGE_SIZE){
vmm_map_page(kernel_page_map, memmap_response->entries[i]->base+j+hhdmoffset, memmap_response->entries[i]->base+j, PTE_BIT_PRESENT | PTE_BIT_RW);
}
}
}
for (uintptr_t text_addr = text_start; text_addr < text_end; text_addr += PAGE_SIZE) {
uintptr_t phys = text_addr - kernel_address->virtual_base + kernel_address->physical_base;
vmm_map_page(kernel_page_map, text_addr, phys, PTE_BIT_PRESENT);
}
/* Kernel starts with the text section */
kernel_start = text_start;
kprintf("vmm: text_start: 0x{xn}vmm: text_end: 0x{xn}", text_start, text_end);
for (uintptr_t rodata_addr = rodata_start; rodata_addr < rodata_end; rodata_addr += PAGE_SIZE) {
uintptr_t phys = rodata_addr - kernel_address->virtual_base + kernel_address->physical_base;
vmm_map_page(kernel_page_map, rodata_addr, phys, PTE_BIT_PRESENT | PTE_BIT_NX);
}
kprintf("vmm: rodata_start: 0x{xn}vmm: rodata_end: 0x{xn}", rodata_start, rodata_end);
for (uintptr_t data_addr = data_start; data_addr < data_end; data_addr += PAGE_SIZE) {
uintptr_t phys = data_addr - kernel_address->virtual_base + kernel_address->physical_base;
vmm_map_page(kernel_page_map, data_addr, phys, PTE_BIT_PRESENT | PTE_BIT_RW | PTE_BIT_NX);
}
kprintf("vmm: data_start: 0x{xn}vmm: data_end: 0x{xn}", data_start, data_end);
/* Kernel ends with the data section */
kernel_end = data_end;
extern uint64_t lapic_address;
/* Map the APIC */
vmm_map_page(kernel_page_map, lapic_address, lapic_address - hhdmoffset, PTE_BIT_PRESENT | PTE_BIT_RW | PTE_BIT_NX);
/* Map the ACPI tables */
extern xsdt_t *xsdt;
extern rsdt_t *rsdt;
if(!xsdt && rsdt){
/* Map the amount of pages occupied by the RSDT + 1, because even if it doesn't
fill an entire page it still requires a page */
for(uint64_t i = 0; i < rsdt->header.length / PAGE_SIZE + 1; i++){
kprintf("mapping 0x{xn}", (uint64_t)rsdt + i * PAGE_SIZE);
vmm_map_page(kernel_page_map, (uint64_t)rsdt + i * PAGE_SIZE, ((uint64_t)rsdt - hhdmoffset) + i * PAGE_SIZE, PTE_BIT_PRESENT | PTE_BIT_RW | PTE_BIT_NX);
}
}else{
for(uint64_t i = 0; i < xsdt->header.length / PAGE_SIZE + 1; i++){
kprintf("mapping 0x{xn}", (uint64_t)xsdt + i * PAGE_SIZE);
vmm_map_page(kernel_page_map, (uint64_t)xsdt + i * PAGE_SIZE, ((uint64_t)xsdt - hhdmoffset) + i * PAGE_SIZE, PTE_BIT_PRESENT | PTE_BIT_RW | PTE_BIT_NX);
}
}
vmm_set_ctx(kernel_page_map);
asm volatile(
"movq %%cr3, %%rax\n\
movq %%rax, %%cr3\n"
: : : "rax"
);
}
uint64_t *get_lower_table(uint64_t *page_map, uint64_t offset){
if((page_map[offset] & PTE_BIT_PRESENT) != 0){
return (uint64_t*)( ((uint64_t)page_map[offset] & 0x000ffffffffff000) + hhdmoffset);
}
uint64_t *ret = pmm_alloc();
if(!ret){
klog(LOG_ERROR, __func__, "Failed to allocate page table");
kprintf("page_map: 0x{xn}", (uint64_t)page_map);
kprintf("offset: 0x{xn}", offset);
return NULL;
}
memset((uint64_t*)((uint64_t)ret + hhdmoffset), 0, PAGE_SIZE);
page_map[offset] = (uint64_t)ret | PTE_BIT_PRESENT | PTE_BIT_RW | PTE_BIT_US;
return (uint64_t*)((uint64_t)ret + hhdmoffset);
}
atomic_flag page_table_lock = ATOMIC_FLAG_INIT;
void vmm_map_page(uint64_t *page_map, uint64_t virt_addr, uint64_t phys_addr, uint64_t flags){
/* Probably slow, fix in future */
acquire_lock(&page_table_lock);
uint64_t pml4_offset = (virt_addr >> 39) & 0x1ff;
uint64_t pdp_offset = (virt_addr >> 30) & 0x1ff;
uint64_t pd_offset = (virt_addr >> 21) & 0x1ff;
uint64_t pt_offset = (virt_addr >> 12) & 0x1ff;
uint64_t *pdp = get_lower_table(page_map, pml4_offset);
if(!pdp){
klog(LOG_ERROR, __func__, "Failed to allocate PDP");
kkill();
}
uint64_t *pd = get_lower_table(pdp, pdp_offset);
if(!pd){
klog(LOG_ERROR, __func__, "Failed to allocate PD");
kkill();
}
uint64_t *pt = get_lower_table(pd, pd_offset);
if(!pt){
klog(LOG_ERROR, __func__, "Failed to allocate PT");
kkill();
}
pt[pt_offset] = phys_addr | flags;
asm volatile(
"movq %%cr3, %%rax\n\
movq %%rax, %%cr3\n"
: : : "rax"
);
free_lock(&page_table_lock);
}
void vmm_free_page(uint64_t *page_map, uint64_t virt_addr){
uint64_t pml4_offset = (virt_addr >> 39) & 0x1ff;
uint64_t pdp_offset = (virt_addr >> 30) & 0x1ff;
uint64_t pd_offset = (virt_addr >> 21) & 0x1ff;
uint64_t pt_offset = (virt_addr >> 12) & 0x1ff;
uint64_t *pdp = get_lower_table(page_map, pml4_offset);
if(!pdp){
klog(LOG_ERROR, __func__, "Failed to allocate PDP");
kkill();
}
uint64_t *pd = get_lower_table(pdp, pdp_offset);
if(!pd){
klog(LOG_ERROR, __func__, "Failed to allocate PD");
kkill();
}
uint64_t *pt = get_lower_table(pd, pd_offset);
if(!pt){
klog(LOG_ERROR, __func__, "Failed to allocate PT");
kkill();
}
/* Free the page at the physical address pointed by the pt entry */
pmm_free((uint64_t*)(pt[pt_offset] & 0x000ffffffffff000));
/* Set it to zero (mark as not present) */
pt[pt_offset] = 0;
asm volatile(
"movq %%cr3, %%rax\n\
movq %%rax, %%cr3\n"
: : : "rax"
);
}
/* Maps `size` contiguous pages starting at phys_addr to virt_addr */
int vmm_map_continous_pages(uint64_t *page_map, uint64_t virt_addr, uint64_t phys_addr, uint64_t size, uint64_t flags){
for(uint64_t i = 0; i < size; i++){
vmm_map_page(page_map, virt_addr + i * PAGE_SIZE, phys_addr + i * PAGE_SIZE, flags);
}
return 0;
}
/* Allocates and maps memory into the kernel address space.
   Note: assumes successive pmm_alloc calls return adjacent pages,
   which currently holds only because of the order the freelist is built in */
void *kernel_allocate_memory(uint64_t size, uint64_t flags){
if(size == 0){
return NULL;
}
void *ret = NULL;
for(uint64_t i = 0; i < size; i += PAGE_SIZE){
ret = pmm_alloc();
if(!ret){
return NULL;
}
vmm_map_page(kernel_page_map, (uint64_t)ret + hhdmoffset, (uint64_t)ret, PTE_BIT_PRESENT | flags);
}
return (void*)((uint64_t)ret + hhdmoffset);
}
/* Maps `size` pages starting at phys_addr into the kernel's address space */
void kernel_map_pages(void *phys_addr, uint64_t size, uint64_t flags){
for(uint64_t i = 0; i < size; i++){
vmm_map_page(kernel_page_map, (uint64_t)phys_addr + hhdmoffset + (i * PAGE_SIZE), (uint64_t)phys_addr + (i * PAGE_SIZE), PTE_BIT_PRESENT | flags);
}
}
void kernel_unmap_pages(void *addr, uint64_t size){
for(uint64_t i = 0; i < size; i++){
vmm_free_page(kernel_page_map, (uint64_t)addr + i*PAGE_SIZE);
}
}

src/mm/vmm.h Normal file
@@ -0,0 +1,23 @@
#include <stdint.h>
#define PTE_BIT_PRESENT 0x1 // Present bit
#define PTE_BIT_RW 0x2 // Read/write bit
#define PTE_BIT_US 0x4 // User and Supervisor bit
#define PTE_BIT_NX 0x4000000000000000 // Non-executable bit
#define PTE_BIT_UNCACHABLE (1 << 4)
#define PAGE_SIZE 4096
void tlb_flush(void);
void vmm_map_page(uint64_t *page_map, uint64_t virt_address, uint64_t phys_address, uint64_t flags);
int vmm_map_continous_pages(uint64_t *page_map, uint64_t virt_addr, uint64_t phys_addr, uint64_t size, uint64_t flags);
void vmm_free_page(uint64_t *page_map, uint64_t virt_addr);
void vmm_init();
void vmm_set_ctx(uint64_t *page_map);
void *kernel_allocate_memory(uint64_t size, uint64_t flags);
void kernel_map_pages(void *phys_addr, uint64_t size, uint64_t flags);
void kernel_unmap_pages(void *addr, uint64_t size);
typedef char link_symbol_ptr[];

47
src/scheduler/sched.asm Normal file
View file

@ -0,0 +1,47 @@
default rel
global switch_context
; Switch context from old to new.
; void switch_context(context *old, context *new);
;                     rdi           rsi
; The offsets below match struct context in sched.h:
; {rbx, rsp, rbp, r12, r13, r14, r15, rip, rflags}
; rflags is left untouched for now.
switch_context:
; Save the callee-saved registers into *old
mov [rdi + 0],  rbx
mov [rdi + 16], rbp
mov [rdi + 24], r12
mov [rdi + 32], r13
mov [rdi + 40], r14
mov [rdi + 48], r15
; The return address becomes the saved rip; rsp is saved as it
; will be once that return address has been consumed
mov rax, [rsp]
mov [rdi + 56], rax
lea rax, [rsp + 8]
mov [rdi + 8],  rax
; Load the new context and jump to its rip
mov rbx, [rsi + 0]
mov rbp, [rsi + 16]
mov r12, [rsi + 24]
mov r13, [rsi + 32]
mov r14, [rsi + 40]
mov r15, [rsi + 48]
mov rsp, [rsi + 8]
mov rax, [rsi + 56]
jmp rax

123
src/scheduler/sched.c Normal file
View file

@ -0,0 +1,123 @@
#include <stdio.h>
#include <string.h>
#include <SFB25.h>
#include "../hal/smp.h"
#include <error.h>
#include "../mm/kmalloc.h"
#include "sched.h"
extern void switch_context(context *old, context *new);
#define QUANTUM_US 10000
int next_pid = 1;
void idle_task(){
kprintf("Hello world from bruhd task!\n");
for(;;);
}
void test_task(){
kprintf("Hello world from scheduled task!\n");
return;
}
/* Set up a process structure */
proc *alloc_process(void){
asm volatile("cli");
cpu_state *state = get_cpu_struct();
proc *proc_list = state->process_list;
for(uint64_t i = 0; i < PROC_MAX; i++){
if(proc_list[i].state == UNUSED){
proc_list[i].kstack = kmalloc(INITIAL_STACK_SIZE);
if(proc_list[i].kstack == NULL){
klog(LOG_ERROR, __func__, "Failed to alloc stack");
asm volatile("sti");
return NULL;
}
/* Set the process ready to be executed */
proc_list[i].state = READY;
proc_list[i].pid = next_pid++;
/* The stack grows down, so rsp starts at its top */
uint8_t *sp = (uint8_t*)((uint64_t)proc_list[i].kstack + INITIAL_STACK_SIZE);
proc_list[i].context.rip = 0;
proc_list[i].context.rsp = (uint64_t)sp;
asm volatile("sti");
return &proc_list[i];
}
}
asm volatile("sti");
return NULL;
}
kstatus add_task(uint64_t *entry){
proc *proc = alloc_process();
if (proc == NULL) {
klog(LOG_ERROR, __func__, "proc == null!");
kkill();
}
proc->context.rip = (uint64_t)entry;
return KERNEL_STATUS_SUCCESS;
}
void scheduler_init(){
cpu_state *state = get_cpu_struct();
if(state->current_process != NULL){
kprintf("sched: scheduler on CPU {d} already initialized!\n", state->lapic_id);
kkill();
}
proc *proc_list = state->process_list;
/* Make the idle task the first process; add_task also gives it a stack and entry point */
add_task((uint64_t*)idle_task);
add_task((uint64_t*)test_task);
/* The scheduler's own state lives in this local; scheduler_init never
returns, so it stays valid for the lifetime of this CPU */
context sched_ctx = {0};
for(;;){
for(int i = 0; i < PROC_MAX; i++){
if(proc_list[i].state == READY){
state->current_process = &proc_list[i];
state->current_process->state = RUNNING;
/* Save into the scheduler context, not a copy that dies when this iteration ends */
switch_context(&sched_ctx, &state->current_process->context);
}
}
}
}
void scheduler_tick(){
/* TODO: preempt the current process once its QUANTUM_US quantum expires */
}

30
src/scheduler/sched.h Normal file
View file

@ -0,0 +1,30 @@
#pragma once
#include <stdint.h>
typedef enum proc_state {
UNUSED = 0,
RUNNING,
READY,
SLEEPING
}proc_state;
typedef struct context {
uint64_t rbx, rsp, rbp, r12, r13, r14, r15;
uint64_t rip, rflags;
} __attribute__((packed))context;
typedef struct proc {
uint64_t *mem;
uint64_t *kstack;
proc_state state;
uint16_t pid;
context context;
}proc;
void scheduler_init();
#define PROC_MAX 512 // Max number of processes
#define INITIAL_STACK_SIZE 0x10000

121
src/sys/acpi.c Normal file
View file

@ -0,0 +1,121 @@
#include <limine.h>
#include <stddef.h>
#include <stdio.h>
#include <SFB25.h>
#include <string.h>
#include <stdalign.h>
#include "acpi.h"
static volatile struct limine_rsdp_request rsdp_request = {
.id = LIMINE_RSDP_REQUEST,
.revision = 0,
};
extern uint64_t hhdmoffset;
xsdt_t *xsdt;
rsdt_t *rsdt;
madt_t *madt;
/* Returns a pointer to the table with the specified signature, or NULL if it is not found. The returned address is physical. */
uint64_t *find_acpi_table(char *signature){
uint64_t entries = 0; // stores the total number of entries in the table
if(xsdt){
/* The total number of entries is the table length minus the standard header, divided by 8: the XSDT holds an array of 8-byte-wide physical addresses */
entries = (xsdt->header.length - sizeof(desc_header_t)) / 8;
}else{
/* The RSDT holds 4-byte-wide physical addresses instead */
entries = (rsdt->header.length - sizeof(desc_header_t)) / 4;
}
desc_header_t *header;
for(uint64_t i = 0; i < entries; i++){
if(xsdt){
header = (desc_header_t*)(xsdt->entries_base[i]);
}else{
header = (desc_header_t*)(rsdt->entries_base[i]);
}
/* Get the virtual address of the header so we can access its signature */
desc_header_t *virt = (desc_header_t*)((uint64_t)header + hhdmoffset);
if(memcmp(virt->signature, signature, 4) == 0){
return (uint64_t*)header;
}
}
return NULL;
}
void acpi_init(void){
if(rsdp_request.response == NULL){
klog(LOG_ERROR, "acpi", "RSDP request is NULL");
kkill();
}
rsdp_t *rsdp = (rsdp_t*)(rsdp_request.response->address);
kprintf("RSDP address: 0x{xn}", (uint64_t)(rsdp));
/* If the system's ACPI revision is 2 or higher, use the XSDT */
if(rsdp->revision >= 2){
rsdt = NULL;
xsdt = (xsdt_t*)(rsdp->xsdt_address + hhdmoffset);
klog(LOG_INFO, "acpi", "Using XSDT header");
kprintf("XSDT address: 0x{xn}", (uint64_t)xsdt);
kprintf("OEMID: {ccccccn}", xsdt->header.oemid[0], xsdt->header.oemid[1], xsdt->header.oemid[2], xsdt->header.oemid[3], xsdt->header.oemid[4], xsdt->header.oemid[5]);
}else{
xsdt = NULL;
rsdt = (rsdt_t*)(rsdp->rsdt_address + hhdmoffset);
klog(LOG_INFO, "acpi", "Using RSDT header");
kprintf("RSDT address: 0x{xn}", (uint64_t)rsdt);
kprintf("OEMID: {ccccccn}", rsdt->header.oemid[0], rsdt->header.oemid[1], rsdt->header.oemid[2], rsdt->header.oemid[3], rsdt->header.oemid[4], rsdt->header.oemid[5]);
}
uint64_t *madt_phys = find_acpi_table("APIC");
if(!madt_phys){
klog(LOG_ERROR, __func__, "MADT table not found");
kkill();
}
/* find_acpi_table returns a physical address, so translate it before dereferencing */
madt = (madt_t*)((uint64_t)madt_phys + hhdmoffset);
}
uint64_t *find_ics(uint64_t type){
uint64_t length = (madt->header.length - sizeof(desc_header_t) - 8);
uint64_t *base_addr = (uint64_t*)madt->ics;
uint64_t i = 0;
while (i < length) {
ics_t *header = (ics_t*)((uint64_t)base_addr + i);
if(header->type == type){
return (uint64_t*)header;
}
i += header->length;
}
return NULL;
}
uint32_t find_iso(uint8_t legacy){
uint64_t length = (madt->header.length - sizeof(desc_header_t) - 8);
uint64_t *base_addr = (uint64_t*)madt->ics;
uint64_t i = 0;
while (i < length) {
ics_t *header = (ics_t*)((uint64_t)base_addr + i);
if(header->type == 0x2){
iso_t *iso = (iso_t*)header;
if(legacy == iso->source){
return iso->gsi;
}
}
i += header->length;
}
/* No override found: the GSI is equal to the legacy pin */
return legacy;
}

Some files were not shown because too many files have changed in this diff