Initial commit

Ralf Zerres
2025-05-23 16:09:48 +02:00
parent 8335756109
commit aa80242275
75 changed files with 66335 additions and 2 deletions

20
.gitignore vendored Normal file

@@ -0,0 +1,20 @@
# temporary stuff
*~
.idea
tmp
# OS temporaries (macOS, Windows)
.DS_Store
Thumbs.db
# generated by Cargo
Cargo.lock
target
# will be auto-generated with cargo-readme
#**/README.md
#README.md
# the render directories
book
doc

11
.woodpecker/.build.yml Normal file

@@ -0,0 +1,11 @@
pipeline:
  build:
    image: debian:stable-slim
    commands:
      - echo "building the German variant from mdBook sources"
      - mdbook build --dest-dir doc/de
      - ls -l doc/de/*
      - cat doc/de/CNAME
    depends_on:
      - cargo

22
.woodpecker/.cargo.yml Normal file

@@ -0,0 +1,22 @@
workspace:
  base: /rust
  path: build/
matrix:
  #RUST: [ stable, beta, nightly ]
  RUST: stable
pipeline:
  test:
    image: rust
    environment: [ CARGO_TERM_COLOR=always ]
    when:
      branch:
        include: [ main, develop ]
      platform: linux/amd64
    commands:
      - echo "install the rust toolchain"
      - rustup default $RUST
      - echo "install the multilingual version of mdBook"
      - cargo install mdbook
      - cargo check

8
.woodpecker/.deploy.yml Normal file

@@ -0,0 +1,8 @@
pipeline:
  deploy:
    image: debian:stable-slim
    commands:
      - echo deploying
    depends_on:
      - build

66
Cargo.toml Normal file

@@ -0,0 +1,66 @@
[package]
name = "WRI IT-Konzept"
version = "0.0.1"
authors = [
"Ralf Zerres <ralf.zerres@networkx.de>"
]
description = "Documentation des Konzepts zur IT-Infrastructure bei WERNER RI."
edition = "2021"
[dependencies]
docopt = "1.1.0"
flate2 = "1.0.13"
lazy_static = "1.4.0"
regex = "1.3.3"
serde = "1.0"
tar = "0.4.26"
walkdir = "2.3.1"
[build-dependencies]
cargo-readme = "3.2.0"
mdbook = "~0.4.28"
#mdbook = { git = "https://github.com/Ruin0x11/mdbook", branch = "localization", version= "0.4.15" }
#mdbook-extended-markdown-table = "^0.1.0"
#mdbook-latex = "^0.1"
#mdbook-mermaid = "0.8.3"
#mdbook-latex = "0.1.3"
#mdbook-latex = { git = "https://github.com/lbeckman314/mdbook-latex.git", branch = "master" }
#md2tex = { git = "https://github.com/lbeckman314/md2tex.git", branch = "master" }
[output.html]
[output.linkcheck]
optional = true
[[bin]]
name = "concat_chapters"
path = "tools/src/bin/concat_chapters.rs"
[[bin]]
name = "convert_quotes"
path = "tools/src/bin/convert_quotes.rs"
[[bin]]
name = "lfp"
path = "tools/src/bin/lfp.rs"
[[bin]]
name = "link2print"
path = "tools/src/bin/link2print.rs"
[[bin]]
name = "release_listings"
path = "tools/src/bin/release_listings.rs"
[[bin]]
name = "remove_hidden_lines"
path = "tools/src/bin/remove_hidden_lines.rs"
[[bin]]
name = "remove_links"
path = "tools/src/bin/remove_links.rs"
[[bin]]
name = "remove_markup"
path = "tools/src/bin/remove_markup.rs"

155
README.md

@@ -1,3 +1,154 @@
# WRI IT-Konzept
Documentation of the IT concept at WERNER RI
<img src="src/assets/Logo_WERNER_RI.png" width="40%" loading="lazy"/>
This repository contains the text sources documenting the "IT-Konzept" at WERNER RI.
We will refer to it as the `IT-Konzept` from here on.
### TLDR
To conveniently read an online version of this documentation, please open the link to the
[IT-Konzept][IT-Konzept-Doc].
[IT-Konzept-Doc]: ./doc/en/index.html
### Requirements
Building the rendered text requires the program [mdBook] and its
helper tools. Ideally, the installed version should be the same
one that rust-lang/rust uses. Install these tools with:
```console
$ cargo install mdbook
```
This command will fetch a suitable mdbook version with all needed
dependencies from [crates.io].
You may extend the call with
```console
$ cargo install mdbook mdbook-linkchecker mdbook-mermaid
```
This will enable us to use a link checker to ensure that the
links inside the markdown sources resolve to valid
targets. mdbook-mermaid is a preprocessor for mdbook that adds
mermaid.js support. We may use it to create graphs that visualize
some process flows.
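Once installed, mdbook-mermaid can register its assets in the book
(a sketch; it assumes the book lives in the current directory):
```console
$ mdbook-mermaid install .
```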
[crates.io]: https://crates.io/search?q=mdbook
### Multilingual version of mdBook
The documentation aims to make translations as flawless as
possible.
We are using mainline `mdbook` with the following extensions from the
[mdBook-i18n-helpers] crate.
These helpers implement multilingual support on top of the
gettext / xgettext subsystem.
As an alternative, there exists a patch-set for version v0.4.15 that
adds the needed glue to organize a book as a
multilingual structure: all sources stored in a single hierarchical
source tree. This work isn't finished yet, but it is good enough to
use this branch for production needs. Thank you, [Nutomic
and Ruin0x11][mdBook localization].
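A translation round trip with the i18n helpers then looks roughly
like this (a sketch following the crate's documented gettext
workflow; the `po/de.po` path is an assumption):
```console
$ cargo install mdbook-i18n-helpers
$ MDBOOK_OUTPUT='{"xgettext": {}}' mdbook build -d po
$ msgmerge --update po/de.po po/messages.pot
$ MDBOOK_BOOK__LANGUAGE=de mdbook build -d doc/de
```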
### Cargo handled README
The README.md file you are reading now is auto-generated via the
[cargo-readme] crate. It parses the Rust `doc comments` in
`src/lib.rs` and renders them into the target README.md
file. The installation is optional.
[cargo-readme]: https://github.com/livioribeiro/cargo-readme
You need it if you make changes to `src/lib.rs` and want to
update or regenerate the target README like this:
```console
$ cargo install cargo-readme
$ cargo readme > README.md
```
[mdBook]: https://github.com/rust-lang-nursery/mdBook
[mdBook-i18n-helpers]: https://github.com/google/mdbook-i18n-helpers
[mdBook localization]: https://github.com/Ruin0x11/mdbook/tree/localization
### Building
#### Building the documentation
To build the documentation with the default language (here: 'de'), change
into the project's root directory and type:
```console
$ mdbook build --dest-dir doc/de
```
The rendered HTML output will be placed underneath the
`doc/de` subdirectory. To check it out, open it in your web
browser.
_Firefox:_
```console
$ firefox doc/de/index.html # Linux
$ open -a "Firefox" doc/de/index.html # OS X
$ Start-Process "firefox.exe" .\doc\de\index.html # Windows (PowerShell)
$ start firefox.exe .\doc\de\index.html # Windows (Cmd)
```
_Chrome:_
```console
$ google-chrome doc/de/index.html # Linux
$ open -a "Google Chrome" doc/de/index.html # OS X
$ Start-Process "chrome.exe" .\doc\de\index.html # Windows (PowerShell)
$ start chrome.exe .\doc\de\index.html # Windows (Cmd)
```
Executing `mdbook serve` will have **mdbook** act as a web server,
which can be accessed by opening the following URL: http://localhost:3000.
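For example, to serve the German variant and open it in your default
browser (the port defaults to 3000 and can be changed with `--port`):
```console
$ mdbook serve --dest-dir doc/de --open
```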
To run the tests:
```console
$ mdbook test
```
#### Building a language variant of the book
Translated versions of the book are placed inside the source tree
in the subdirectory `src/<language id>`.
For example, to render the English version (language id: 'en'), change
into the document's root directory and type:
```console
$ mdbook build --dest-dir doc/en --open
```
The rendered HTML output will be placed underneath the
`doc/en` subdirectory. Since we appended the `--open` parameter, your default browser should fire up and ... tada!
### Spellchecking
To scan source files for spelling errors, you can use the `spellcheck.sh`
script. It needs a dictionary of valid words, which is provided in
`dictionary.txt`. If the script produces a false positive (say, you used
the word `BTreeMap`, which the script considers invalid), you need to add
this word to `dictionary.txt` (keep the sorted order for consistency).
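A typical round trip might look like this (a sketch; the script's
argument handling is an assumption):
```console
$ ./spellcheck.sh
$ echo "BTreeMap" >> dictionary.txt
$ sort -o dictionary.txt dictionary.txt
```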
### License
<!-- License source -->
[Logo-CC_BY]: https://i.creativecommons.org/l/by/4.0/88x31.png "Creative Commons Logo"
[License-CC_BY]: https://creativecommons.org/licenses/by/4.0/legalcode "Creative Commons License"
This work is licensed under a [Creative Commons Attribution 4.0 License][License-CC_BY].
![Creative Commons Logo][Logo-CC_BY]
© 2025 Ralf Zerres

78
book.toml Normal file

@@ -0,0 +1,78 @@
[book]
title = "WERNER RI: IT-Konzept"
description = "Documentation zur IT-Infrastructure bei WERNER RI ."
authors = [
"Ralf Zerres <ralf.zerres@networkx.de>",
]
[rust]
edition = "2021"
[build]
build-dir = "doc"
create-missing = false
[output.html]
additional-css = ["theme/2020-edition.css"]
#additional-js = ["ferries.js", "mermaid.min.js", "mermaid-init.js"]
cname = "portal-it-structure.rs"
copy-fonts = true
smart-punctuation = true
default-theme = "rustkZ"
edit-url-template = "https://gitea.networkx.de/WRI/IT-Konzept/tree/man/book/{path}"
git-repository-url = "https://gitea.networkx.de/WRI/IT-Konzept/tree/man/book"
git-repository-icon = "fa-github"
#input-404 = "404.md"
preferred-dark-theme = "navy"
mathjax-support = true
[output.html.fold]
enable = true
level = 0
[output.html.playground]
editable = true
line-numbers = true
#[output.html.redirect]
#"/format/config.html" = "configuration/index.html"
[output.html.search]
limit-results = 20
use-boolean-and = true
boost-title = 2
boost-hierarchy = 2
boost-paragraph = 1
expand = true
heading-split-level = 2
#[output.linkcheck]
## Should we check links on the internet? Enabling this option adds a
## non-negligible performance impact
#follow-web-links = false
## Are we allowed to link to files outside of the book's root directory? This
## may help prevent linking to sensitive files (e.g. "../../../../etc/shadow")
#traverse-parent-directories = true
[preprocessor]
[preprocessor.extended-markdown-table]
[preprocessor.gettext]
after = ["links"]
#[preprocessor.mermaid]
#command = "mdbook-mermaid"
#[language.en]
#"name = "English"
#[language.de]
#name = "Deutsch"
#title = "WERNER RI: Dokumentation zur IT-Konzept"
#description = "Dokumentation der Konzeption zur Bereitstellung der IT-Infrastruktur bei WERNER RI."
#input-404 = "de/404.md"
#[build]
#extra-watch-dirs = ["po"]

3
src/404.md Normal file

@@ -0,0 +1,3 @@
# Document not found (404)
This URL is invalid, sorry. Try the search instead!

17
src/SUMMARY.md Normal file

@@ -0,0 +1,17 @@
# WERNER RI: IT-Konzept
[Summary](title-page.md)
[Introduction](ch00-00-introduction.md)
- [IT-Infrastructure](ch01-00-it-structure.md)
- [Firewall](ch01-01-firewall.md)
- [Network-Infrastructure](ch01-02-network-infrastructure.md)
- [Server-Structure](ch01-03-server-structure.md)
- [Virtualization-Structure](ch01-04-virtualization-structure.md)
- [Storage-Structure](ch01-05-storage-structure.md)
- [Backup and Recovery](ch02-00-backup-and-recovery-structure.md)
- [Inhouse Backup](ch02-01-inhouse-backup.md)
- [Remote Backup](ch02-02-remote-backup.md)

Binary file not shown (new image, 2.6 MiB)
Binary file not shown (new image, 2.5 MiB)
Binary file not shown (new image, 4.8 MiB)
Binary file not shown (new image, 1.4 MiB)
Binary file not shown (new image, 1.9 MiB)
Binary file not shown (new image, 3.4 MiB)
Binary file not shown (new image, 2.9 MiB)
Binary file not shown (new image, 55 KiB)
Binary file not shown (new image, 30 KiB)
BIN src/assets/GSM_overview.png Normal file (new image, 47 KiB; binary not shown)
Binary file not shown (new image, 93 KiB)
Binary file not shown (new image, 127 KiB)
Binary file not shown (new image, 74 KiB)
BIN src/assets/Logo_WRI.jpg Normal file (new image, 676 KiB; binary not shown)
BIN src/assets/Logo_WRI.png Normal file (new image, 2.0 KiB; binary not shown)
BIN src/assets/Lupe.jpg Normal file (new image, 13 KiB; binary not shown)


@@ -0,0 +1,819 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Sodipodi ("http://www.sodipodi.com/") -->
<svg
id="svg606"
viewBox="0 0 18700 18600"
sodipodi:version="0.34"
sodipodi:docname="Manification_glass.svg"
version="1.1"
inkscape:version="1.1 (c4e8f9ed74, 2021-05-24)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<defs
id="defs1129">
<linearGradient
id="linearGradient3653"
y2=".53906"
y1=".53908"
x2=".86047"
x1="-.24807">
<stop
id="stop1137"
style="stop-color:#2e97af"
offset="0" />
<stop
id="stop1139"
style="stop-color:#ffffff"
offset="1" />
</linearGradient>
<linearGradient
id="linearGradient3561"
y2="0.25"
y1=".42969"
x2="0.416"
x1="0.224">
<stop
id="stop3559"
style="stop-color:#57575a"
offset="0" />
<stop
id="stop3560"
style="stop-color:#ffffff"
offset="1" />
</linearGradient>
<linearGradient
id="linearGradient3562"
y2=".51563"
y1=".52344"
x2="0.664"
x1="0.392">
<stop
id="stop3656"
style="stop-color:#ffffff;stop-opacity:0"
offset="0" />
<stop
id="stop3657"
style="stop-color:#7d8787;stop-opacity:.1451"
offset="1" />
</linearGradient>
<linearGradient
id="linearGradient3653-3"
y2=".53906"
y1=".53908"
x2=".86047"
x1="-.24807">
<stop
id="stop3651"
style="stop-color:#2e97af"
offset="0" />
<stop
id="stop3652"
style="stop-color:#ffffff"
offset="1" />
</linearGradient>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient3562"
id="linearGradient1135"
gradientUnits="userSpaceOnUse"
x1="4729.7671"
y1="166.88583"
x2="6481.4946"
y2="9675.8418" />
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
inkscape:zoom="0.040483871"
inkscape:cx="3680.4781"
inkscape:cy="9312.3506"
inkscape:window-width="2560"
inkscape:window-height="1339"
inkscape:window-x="0"
inkscape:window-y="32"
inkscape:window-maximized="1"
inkscape:current-layer="g3556" />
<g
id="g3556"
style="fill:url(#linearGradient3653-3)"
transform="translate(59.236 .0021737)">
<path
id="path3557"
style="fill:url(#linearGradient1135);fill-opacity:1"
d="m5885 11257l-729-50-700-142-663-231-620-311-571-386-516-454-454-516-386-571-311-620-231-663-142-700-49-728 49-729 142-700 231-663 311-620 386-571 454-516 516-454 571-386 620-311 663-231 700-142 729-49 728 49 700 142 663 231 620 311 571 386 516 454 454 516 386 571 311 620 231 663 142 700 50 729-50 728-142 700-231 663-311 620-386 571-454 516-516 454-571 386-620 311-663 231-700 142-728 50zm-360-361l-730-50-699-142-663-231-621-311-571-386-516-454-454-516-386-571-311-620-231-663-142-699-49-728 49-729 142-699 231-663 311-621 386-571 454-516 516-455 571-386 621-311 663-230 699-143 730-49 729 49 699 142 663 231 620 311 571 386 516 454 454 516 386 571 311 621 230 663 143 699 49 730-50 729-142 699-231 663-311 620-386 571-454 516-516 454-571 386-620 311-663 230-699 143-728 49z" />
</g>
<path
id="path3643"
style="fill-rule:evenodd;fill:#bebfbf"
d="m11267 5887.6c0 2995.7-2409.4 5424.4-5382 5424.4s-5382.4-2428.7-5382.4-5424.4c0.03-2995.7 2409.8-5424.2 5382.4-5424.2 2972.6-0.01 5382 2428.5 5382 5424.2zm-293-314.1c0 2984.1-2419 5403.5-5403.2 5403.5-2984.1 0-5403.3-2419.4-5403.3-5403.5 0.04-2984.2 2419.2-5403.3 5403.3-5403.3 2984.2-0.01 5403.2 2419.1 5403.2 5403.3z" />
<g
id="g746"
style="fill:#000000"
transform="translate(.0032298 41.888)">
<polygon
id="polygon747"
style="fill:#000000"
points="5885 11257 5879 11332 5140 11281 5156 11207 5171 11132 5890 11181" />
<polygon
id="polygon748"
style="fill:#000000"
points="5156 11207 5140 11281 4430 11136 4456 11065 4481 10993 5171 11132" />
<polygon
id="polygon749"
style="fill:#000000"
points="4456 11065 4430 11136 3758 10901 3793 10834 3827 10766 4481 10993" />
<polygon
id="polygon750"
style="fill:#000000"
points="3793 10834 3758 10901 3130 10585 3173 10523 3215 10460 3827 10766" />
<polygon
id="polygon751"
style="fill:#000000"
points="3173 10523 3130 10585 2551 10194 2602 10137 2652 10079 3215 10460" />
<polygon
id="polygon752"
style="fill:#000000"
points="2602 10137 2551 10194 2028 9733 2086 9683 2143 9632 2652 10079" />
<polygon
id="polygon753"
style="fill:#000000"
points="2086 9683 2028 9733 1569 9209 1632 9167 1694 9124 2143 9632" />
<polygon
id="polygon754"
style="fill:#000000"
points="1632 9167 1569 9209 1178 8630 1246 8596 1313 8561 1694 9124" />
<polygon
id="polygon755"
style="fill:#000000"
points="1246 8596 1178 8630 863 8001 935 7976 1006 7950 1313 8561" />
<polygon
id="polygon756"
style="fill:#000000"
points="935 7976 863 8001 629 7328 704 7313 778 7297 1006 7950" />
<polygon
id="polygon757"
style="fill:#000000"
points="704 7313 629 7328 486 6618 562 6613 637 6607 778 7297" />
<polygon
id="polygon758"
style="fill:#000000"
points="562 6613 486 6618 437 5879 513 5885 588 5890 637 6607" />
<polygon
id="polygon759"
style="fill:#000000"
points="513 5885 437 5879 487 5140 562 5156 636 5171 588 5890" />
<polygon
id="polygon760"
style="fill:#000000"
points="562 5156 487 5140 632 4430 704 4456 775 4481 636 5171" />
<polygon
id="polygon761"
style="fill:#000000"
points="704 4456 632 4430 867 3758 935 3793 1002 3827 775 4481" />
<polygon
id="polygon762"
style="fill:#000000"
points="935 3793 867 3758 1183 3130 1246 3173 1308 3215 1002 3827" />
<polygon
id="polygon763"
style="fill:#000000"
points="1246 3173 1183 3130 1574 2551 1632 2602 1689 2652 1308 3215" />
<polygon
id="polygon764"
style="fill:#000000"
points="1632 2602 1574 2551 2035 2028 2086 2086 2136 2143 1689 2652" />
<polygon
id="polygon765"
style="fill:#000000"
points="2086 2086 2035 2028 2559 1569 2602 1632 2644 1694 2136 2143" />
<polygon
id="polygon766"
style="fill:#000000"
points="2602 1632 2559 1569 3138 1178 3173 1246 3207 1313 2644 1694" />
<polygon
id="polygon767"
style="fill:#000000"
points="3173 1246 3138 1178 3767 863 3793 935 3818 1006 3207 1313" />
<polygon
id="polygon768"
style="fill:#000000"
points="3793 935 3767 863 4440 629 4456 704 4471 778 3818 1006" />
<polygon
id="polygon769"
style="fill:#000000"
points="4456 704 4440 629 5150 486 5156 562 5161 637 4471 778" />
<polygon
id="polygon770"
style="fill:#000000"
points="5156 562 5150 486 5890 437 5885 513 5879 588 5161 637" />
<polygon
id="polygon771"
style="fill:#000000"
points="5885 513 5890 437 6628 487 6613 562 6597 636 5879 588" />
<polygon
id="polygon772"
style="fill:#000000"
points="6613 562 6628 487 7338 632 7313 704 7287 775 6597 636" />
<polygon
id="polygon773"
style="fill:#000000"
points="7313 704 7338 632 8010 867 7976 935 7941 1002 7287 775" />
<polygon
id="polygon774"
style="fill:#000000"
points="7976 935 8010 867 8638 1183 8596 1246 8553 1308 7941 1002" />
<polygon
id="polygon775"
style="fill:#000000"
points="8596 1246 8638 1183 9217 1574 9167 1632 9116 1689 8553 1308" />
<polygon
id="polygon776"
style="fill:#000000"
points="9167 1632 9217 1574 9740 2035 9683 2086 9625 2136 9116 1689" />
<polygon
id="polygon777"
style="fill:#000000"
points="9683 2086 9740 2035 10199 2559 10137 2602 10074 2644 9625 2136" />
<polygon
id="polygon778"
style="fill:#000000"
points="10137 2602 10199 2559 10590 3138 10523 3173 10455 3207 10074 2644" />
<polygon
id="polygon779"
style="fill:#000000"
points="10523 3173 10590 3138 10905 3767 10834 3793 10762 3818 10455 3207" />
<polygon
id="polygon780"
style="fill:#000000"
points="10834 3793 10905 3767 11139 4440 11065 4456 10990 4471 10762 3818" />
<polygon
id="polygon781"
style="fill:#000000"
points="11065 4456 11139 4440 11282 5150 11207 5156 11131 5161 10990 4471" />
<polygon
id="polygon782"
style="fill:#000000"
points="11207 5156 11282 5150 11332 5890 11257 5885 11181 5879 11131 5161" />
<polygon
id="polygon783"
style="fill:#000000"
points="11257 5885 11332 5890 11281 6628 11207 6613 11132 6597 11181 5879" />
<polygon
id="polygon784"
style="fill:#000000"
points="11207 6613 11281 6628 11136 7338 11065 7313 10993 7287 11132 6597" />
<polygon
id="polygon785"
style="fill:#000000"
points="11065 7313 11136 7338 10901 8010 10834 7976 10766 7941 10993 7287" />
<polygon
id="polygon786"
style="fill:#000000"
points="10834 7976 10901 8010 10585 8638 10523 8596 10460 8553 10766 7941" />
<polygon
id="polygon787"
style="fill:#000000"
points="10523 8596 10585 8638 10194 9217 10137 9167 10079 9116 10460 8553" />
<polygon
id="polygon788"
style="fill:#000000"
points="10137 9167 10194 9217 9733 9740 9683 9683 9632 9625 10079 9116" />
<polygon
id="polygon789"
style="fill:#000000"
points="9683 9683 9733 9740 9209 10199 9167 10137 9124 10074 9632 9625" />
<polygon
id="polygon790"
style="fill:#000000"
points="9167 10137 9209 10199 8630 10590 8596 10523 8561 10455 9124 10074" />
<polygon
id="polygon791"
style="fill:#000000"
points="8596 10523 8630 10590 8001 10905 7976 10834 7950 10762 8561 10455" />
<polygon
id="polygon792"
style="fill:#000000"
points="7976 10834 8001 10905 7328 11139 7313 11065 7297 10990 7950 10762" />
<polygon
id="polygon793"
style="fill:#000000"
points="7313 11065 7328 11139 6618 11282 6613 11207 6607 11131 7297 10990" />
<polygon
id="polygon794"
style="fill:#000000"
points="6613 11207 6618 11282 5879 11332 5885 11257 5890 11181 6607 11131" />
<polygon
id="polygon795"
style="fill:#000000"
points="5525 10896 5519 10971 4779 10920 4795 10846 4810 10771 5530 10820" />
<polygon
id="polygon796"
style="fill:#000000"
points="4795 10846 4779 10920 4070 10775 4096 10704 4121 10632 4810 10771" />
<polygon
id="polygon797"
style="fill:#000000"
points="4096 10704 4070 10775 3398 10540 3433 10473 3467 10405 4121 10632" />
<polygon
id="polygon798"
style="fill:#000000"
points="3433 10473 3398 10540 2769 10224 2812 10162 2854 10099 3467 10405" />
<polygon
id="polygon799"
style="fill:#000000"
points="2812 10162 2769 10224 2190 9833 2241 9776 2291 9718 2854 10099" />
<polygon
id="polygon800"
style="fill:#000000"
points="2241 9776 2190 9833 1667 9372 1725 9322 1782 9271 2291 9718" />
<polygon
id="polygon801"
style="fill:#000000"
points="1725 9322 1667 9372 1208 8848 1271 8806 1333 8763 1782 9271" />
<polygon
id="polygon802"
style="fill:#000000"
points="1271 8806 1208 8848 817 8269 885 8235 952 8200 1333 8763" />
<polygon
id="polygon803"
style="fill:#000000"
points="885 8235 817 8269 502 7640 574 7615 645 7589 952 8200" />
<polygon
id="polygon804"
style="fill:#000000"
points="574 7615 502 7640 268 6967 343 6952 417 6936 645 7589" />
<polygon
id="polygon805"
style="fill:#000000"
points="343 6952 268 6967 125 6258 201 6253 276 6247 417 6936" />
<polygon
id="polygon806"
style="fill:#000000"
points="201 6253 125 6258 76 5519 152 5525 227 5530 276 6247" />
<polygon
id="polygon807"
style="fill:#000000"
points="152 5525 76 5519 126 4780 201 4796 275 4811 227 5530" />
<polygon
id="polygon808"
style="fill:#000000"
points="201 4796 126 4780 271 4071 343 4097 414 4122 275 4811" />
<polygon
id="polygon809"
style="fill:#000000"
points="343 4097 271 4071 506 3399 574 3434 641 3468 414 4122" />
<polygon
id="polygon810"
style="fill:#000000"
points="574 3434 506 3399 822 2770 885 2813 947 2855 641 3468" />
<polygon
id="polygon811"
style="fill:#000000"
points="885 2813 822 2770 1213 2191 1271 2242 1328 2292 947 2855" />
<polygon
id="polygon812"
style="fill:#000000"
points="1271 2242 1213 2191 1674 1668 1725 1726 1775 1783 1328 2292" />
<polygon
id="polygon813"
style="fill:#000000"
points="1725 1726 1674 1668 2198 1208 2241 1271 2283 1333 1775 1783" />
<polygon
id="polygon814"
style="fill:#000000"
points="2241 1271 2198 1208 2777 817 2812 885 2846 952 2283 1333" />
<polygon
id="polygon815"
style="fill:#000000"
points="2812 885 2777 817 3408 502 3433 574 3457 645 2846 952" />
<polygon
id="polygon816"
style="fill:#000000"
points="3433 574 3408 502 4080 269 4096 344 4111 418 3457 645" />
<polygon
id="polygon817"
style="fill:#000000"
points="4096 344 4080 269 4789 125 4795 201 4800 276 4111 418" />
<polygon
id="polygon818"
style="fill:#000000"
points="4795 201 4789 125 5530 76 5525 152 5519 227 4800 276" />
<polygon
id="polygon819"
style="fill:#000000"
points="5525 152 5530 76 6269 126 6254 201 6238 275 5519 227" />
<polygon
id="polygon820"
style="fill:#000000"
points="6254 201 6269 126 6978 271 6953 343 6927 414 6238 275" />
<polygon
id="polygon821"
style="fill:#000000"
points="6953 343 6978 271 7650 506 7616 574 7581 641 6927 414" />
<polygon
id="polygon822"
style="fill:#000000"
points="7616 574 7650 506 8278 822 8236 885 8193 947 7581 641" />
<polygon
id="polygon823"
style="fill:#000000"
points="8236 885 8278 822 8857 1213 8807 1271 8756 1328 8193 947" />
<polygon
id="polygon824"
style="fill:#000000"
points="8807 1271 8857 1213 9380 1674 9323 1725 9265 1775 8756 1328" />
<polygon
id="polygon825"
style="fill:#000000"
points="9323 1725 9380 1674 9839 2198 9777 2241 9714 2283 9265 1775" />
<polygon
id="polygon826"
style="fill:#000000"
points="9777 2241 9839 2198 10230 2777 10163 2812 10095 2846 9714 2283" />
<polygon
id="polygon827"
style="fill:#000000"
points="10163 2812 10230 2777 10545 3408 10474 3433 10402 3457 10095 2846" />
<polygon
id="polygon828"
style="fill:#000000"
points="10474 3433 10545 3408 10778 4080 10704 4096 10629 4111 10402 3457" />
<polygon
id="polygon829"
style="fill:#000000"
points="10704 4096 10778 4080 10922 4789 10847 4795 10771 4800 10629 4111" />
<polygon
id="polygon830"
style="fill:#000000"
points="10847 4795 10922 4789 10971 5530 10896 5525 10820 5519 10771 4800" />
<polygon
id="polygon831"
style="fill:#000000"
points="10896 5525 10971 5530 10920 6269 10846 6254 10771 6238 10820 5519" />
<polygon
id="polygon832"
style="fill:#000000"
points="10846 6254 10920 6269 10775 6978 10704 6953 10632 6927 10771 6238" />
<polygon
id="polygon833"
style="fill:#000000"
points="10704 6953 10775 6978 10540 7650 10473 7616 10405 7581 10632 6927" />
<polygon
id="polygon834"
style="fill:#000000"
points="10473 7616 10540 7650 10224 8278 10162 8236 10099 8193 10405 7581" />
<polygon
id="polygon835"
style="fill:#000000"
points="10162 8236 10224 8278 9833 8857 9776 8807 9718 8756 10099 8193" />
<polygon
id="polygon836"
style="fill:#000000"
points="9776 8807 9833 8857 9372 9380 9322 9323 9271 9265 9718 8756" />
<polygon
id="polygon837"
style="fill:#000000"
points="9322 9323 9372 9380 8848 9839 8806 9777 8763 9714 9271 9265" />
<polygon
id="polygon838"
style="fill:#000000"
points="8806 9777 8848 9839 8269 10230 8235 10163 8200 10095 8763 9714" />
<polygon
id="polygon839"
style="fill:#000000"
points="8235 10163 8269 10230 7639 10545 7615 10474 7590 10402 8200 10095" />
<polygon
id="polygon840"
style="fill:#000000"
points="7615 10474 7639 10545 6967 10778 6952 10704 6936 10629 7590 10402" />
<polygon
id="polygon841"
style="fill:#000000"
points="6952 10704 6967 10778 6258 10922 6253 10847 6247 10771 6936 10629" />
<polygon
id="polygon842"
style="fill:#000000"
points="6253 10847 6258 10922 5519 10971 5525 10896 5530 10820 6247 10771" />
</g>
<g
id="g843"
style="fill:rgb(153,153,153)">
<polygon
id="polygon844"
points="9151 10211 9077 10091 9069 9921 9121 9716 9230 9492 9392 9266 9586 9067 9786 8921 9978 8834 10146 8812 10277 8864 10350 8983 10358 9153 10306 9358 10197 9581 10036 9808 9841 10006 9641 10153 9449 10240 9281 10261" />
</g>
<g
id="g845"
style="fill:rgb(0,0,0)">
<polygon
id="polygon846"
points="9129 10224 9055 10104 9055 10104 9053 10100 9051 10092 9077 10091 9098 10077 9172 10197 9151 10211" />
<polygon
id="polygon847"
points="9051 10092 9043 9922 9043 9922 9044 9914 9069 9921 9094 9919 9102 10089 9077 10091" />
<polygon
id="polygon848"
points="9044 9914 9096 9709 9096 9709 9098 9704 9121 9716 9145 9722 9093 9927 9069 9921" />
<polygon
id="polygon849"
points="9121 9716 9098 9704 9209 9477 9230 9492 9250 9506 9143 9727" />
<polygon
id="polygon850"
points="9230 9492 9209 9477 9373 9248 9392 9266 9410 9283 9250 9506" />
<polygon
id="polygon851"
points="9392 9266 9373 9248 9570 9046 9586 9067 9601 9087 9410 9283" />
<polygon
id="polygon852"
points="9570 9046 9770 8900 9770 8900 9775 8897 9786 8921 9801 8941 9601 9087 9586 9067" />
<polygon
id="polygon853"
points="9775 8897 9967 8810 9967 8810 9974 8808 9978 8834 9988 8857 9796 8944 9786 8921" />
<polygon
id="polygon854"
points="9974 8808 10142 8786 10142 8786 10146 8786 10155 8788 10146 8812 10149 8837 9981 8859 9978 8834" />
<polygon
id="polygon855"
points="10155 8788 10286 8840 10286 8840 10290 8842 10293 8844 10298 8850 10277 8864 10267 8887 10136 8835 10146 8812" />
<polygon
id="polygon856"
points="10298 8850 10371 8969 10371 8969 10373 8973 10375 8981 10350 8983 10328 8996 10255 8877 10277 8864" />
<polygon
id="polygon857"
points="10375 8981 10383 9151 10383 9151 10382 9159 10358 9153 10332 9154 10324 8984 10350 8983" />
<polygon
id="polygon858"
points="10382 9159 10330 9364 10330 9364 10328 9369 10306 9358 10281 9351 10333 9146 10358 9153" />
<polygon
id="polygon859"
points="10306 9358 10328 9369 10217 9595 10197 9581 10176 9566 10283 9346" />
<polygon
id="polygon860"
points="10197 9581 10217 9595 10054 9825 10036 9808 10017 9790 10176 9566" />
<polygon
id="polygon861"
points="10036 9808 10054 9825 9856 10026 9841 10006 9825 9985 10017 9790" />
<polygon
id="polygon862"
points="9856 10026 9656 10173 9656 10173 9651 10176 9641 10153 9625 10132 9825 9985 9841 10006" />
<polygon
id="polygon863"
points="9651 10176 9459 10263 9459 10263 9452 10265 9449 10240 9438 10216 9630 10129 9641 10153" />
<polygon
id="polygon864"
points="9452 10265 9284 10286 9284 10286 9279 10286 9271 10284 9281 10261 9277 10235 9445 10214 9449 10240" />
<polygon
id="polygon865"
points="9271 10284 9141 10234 9141 10234 9138 10233 9134 10230 9129 10224 9151 10211 9160 10187 9290 10237 9281 10261" />
</g>
<g
id="g866"
style="fill:rgb(153,153,153)">
<polygon
id="polygon867"
points="9292 10352 9218 10231 9209 10061 9262 9857 9370 9634 9532 9408 9726 9209 9927 9063 10120 8976 10289 8954 10421 9006 10494 9126 10502 9296 10450 9500 10341 9723 10180 9949 9985 10148 9784 10294 9592 10381 9423 10403" />
</g>
<polygon
id="polygon869"
points="9270 10365 9196 10244 9196 10244 9194 10240 9192 10232 9218 10231 9239 10217 9313 10338 9292 10352" />
<polygon
id="polygon870"
points="9192 10232 9183 10062 9183 10062 9184 10054 9209 10061 9234 10059 9243 10229 9218 10231" />
<polygon
id="polygon871"
points="9184 10054 9237 9850 9237 9850 9239 9845 9262 9857 9286 9863 9233 10067 9209 10061" />
<polygon
id="polygon872"
points="9262 9857 9239 9845 9349 9619 9370 9634 9390 9648 9284 9868" />
<polygon
id="polygon873"
points="9370 9634 9349 9619 9513 9390 9532 9408 9550 9425 9390 9648" />
<polygon
id="polygon874"
points="9532 9408 9513 9390 9711 9188 9726 9209 9740 9229 9550 9425" />
<polygon
id="polygon875"
points="9711 9188 9912 9042 9912 9042 9916 9039 9927 9063 9941 9083 9740 9229 9726 9209" />
<polygon
id="polygon876"
points="9916 9039 10109 8952 10109 8952 10116 8950 10120 8976 10130 8999 9937 9086 9927 9063" />
<polygon
id="polygon877"
points="10116 8950 10285 8928 10285 8928 10289 8928 10298 8930 10289 8954 10292 8979 10123 9001 10120 8976" />
<polygon
id="polygon878"
points="10298 8930 10430 8982 10430 8982 10434 8984 10437 8986 10442 8992 10421 9006 10411 9029 10279 8977 10289 8954" />
<polygon
id="polygon879"
points="10442 8992 10515 9112 10515 9112 10517 9116 10519 9124 10494 9126 10472 9139 10399 9019 10421 9006" />
<polygon
id="polygon880"
points="10519 9124 10527 9294 10527 9294 10526 9302 10502 9296 10476 9297 10468 9127 10494 9126" />
<polygon
id="polygon881"
points="10526 9302 10474 9506 10474 9506 10472 9511 10450 9500 10425 9493 10477 9289 10502 9296" />
<polygon
id="polygon882"
points="10450 9500 10472 9511 10361 9737 10341 9723 10320 9708 10427 9488" />
<polygon
id="polygon883"
points="10341 9723 10361 9737 10198 9966 10180 9949 10161 9931 10320 9708" />
<polygon
id="polygon884"
points="10180 9949 10198 9966 9999 10168 9985 10148 9970 10127 10161 9931" />
<polygon
id="polygon885"
points="9999 10168 9798 10314 9798 10314 9794 10317 9784 10294 9769 10273 9970 10127 9985 10148" />
<polygon
id="polygon886"
points="9794 10317 9602 10404 9602 10404 9595 10406 9592 10381 9581 10357 9773 10270 9784 10294" />
<polygon
id="polygon887"
points="9595 10406 9426 10428 9426 10428 9422 10428 9413 10426 9423 10403 9419 10377 9588 10355 9592 10381" />
<polygon
id="polygon888"
points="9413 10426 9282 10375 9282 10375 9279 10373 9275 10371 9270 10365 9292 10352 9301 10328 9432 10379 9423 10403" />
<path
id="path3649"
sodipodi:nodetypes="cccccccccccc"
style="fill-rule:evenodd;fill:url(#linearGradient3561)"
d="m9529.1 10411l7539.9 8074 555-207 324-312 492-479 68-218 68-219-8021-7842.9-277 112.5-340.6 301.1-319.3 342.9-89 447.4z" />
<polygon
id="polygon1091"
points="17084 18485 17046 18519 16436 17862 16474 17828 16511 17793 17121 18450" />
<polygon
id="polygon1092"
points="16474 17828 16436 17862 15823 17203 15861 17169 15898 17134 16511 17793" />
<polygon
id="polygon1093"
points="15861 17169 15823 17203 15206 16542 15244 16508 15281 16473 15898 17134" />
<polygon
id="polygon1094"
points="15244 16508 15206 16542 14585 15877 14623 15843 14660 15808 15281 16473" />
<polygon
id="polygon1095"
points="14623 15843 14585 15877 13960 15209 13998 15175 14035 15140 14660 15808" />
<polygon
id="polygon1096"
points="13998 15175 13960 15209 13331 14539 13369 14505 13406 14470 14035 15140" />
<polygon
id="polygon1097"
points="13369 14505 13331 14539 12697 13865 12735 13831 12772 13796 13406 14470" />
<polygon
id="polygon1098"
points="12735 13831 12697 13865 12059 13188 12097 13154 12134 13119 12772 13796" />
<path
id="path3570"
d="m12097 13154l-38 34-642-680 38-35 37-36 642 682-37 35z" />
<polygon
id="polygon1100"
points="11455 12473 11417 12508 10770 11824 10808 11789 10845 11753 11492 12437" />
<polygon
id="polygon1101"
points="10808 11789 10770 11824 10120 11137 10157 11102 10193 11066 10845 11753" />
<polygon
id="polygon1102"
points="10120 11137 9464 10447 9464 10447 9458 10440 9454 10433 9451 10425 9450 10408 9501 10412 9537 10376 10193 11066 10157 11102" />
<polygon
id="polygon1103"
points="9450 10408 9466 10184 9466 10184 9467 10175 9474 10159 9517 10188 9567 10191 9551 10415 9501 10412" />
<polygon
id="polygon1104"
points="9517 10188 9474 10159 9703 9816 9741 9851 9778 9885 9559 10216" />
<polygon
id="polygon1105"
points="9741 9851 9703 9816 10029 9467 10061 9507 10092 9546 9778 9885" />
<polygon
id="polygon1106"
points="10029 9467 10337 9219 10337 9219 10343 9214 10357 9209 10369 9259 10400 9298 10092 9546 10061 9507" />
<polygon
id="polygon1107"
points="10357 9209 10542 9165 10542 9165 10550 9164 10559 9164 10568 9166 10576 9169 10590 9179 10554 9215 10565 9264 10380 9308 10369 9259" />
<polygon
id="polygon1108"
points="10554 9215 10590 9179 11243 9853 11207 9890 11170 9926 10517 9250" />
<path
id="path3569"
d="m11207 9890l36-37 669 671-35 37-36 36-671-671 37-36z" />
<path
id="path3601"
d="m11877 10561l35-37 681 667-35 37-36 36-681-667 36-36z" />
<path
id="path3600"
d="m12558 11228l35-37 689 663-35 37-36 36-689-663 36-36z" />
<path
id="path3566"
d="m13247 11891l35-37 693 658-35 38-36 37-693-660 36-36z" />
<polygon
id="polygon1113"
points="13940 12550 13975 12512 14668 13169 14633 13207 14597 13244 13904 12587" />
<polygon
id="polygon1114"
points="14633 13207 14668 13169 15356 13822 15321 13859 15285 13895 14597 13244" />
<polygon
id="polygon1115"
points="15321 13859 15356 13822 16036 14472 16001 14509 15965 14545 15285 13895" />
<polygon
id="polygon1116"
points="16001 14509 16036 14472 16703 15119 16668 15156 16632 15192 15965 14545" />
<polygon
id="polygon1117"
points="16668 15156 16703 15119 17356 15764 17320 15800 17283 15835 16632 15192" />
<polygon
id="polygon1118"
points="17320 15800 17356 15764 17987 16406 17951 16442 17914 16477 17283 15835" />
<polygon
id="polygon1119"
points="17987 16406 18595 17046 18595 17046 18600 17052 18604 17059 18607 17065 18609 17073 18609 17087 18559 17082 18522 17117 17914 16477 17951 16442" />
<polygon
id="polygon1120"
points="18609 17087 18586 17283 18586 17283 18584 17292 18577 17307 18536 17278 18485 17272 18508 17076 18559 17082" />
<polygon
id="polygon1121"
points="18536 17278 18577 17307 18351 17628 18315 17593 18278 17557 18494 17248" />
<polygon
id="polygon1122"
points="18315 17593 18351 17628 18010 17987 17977 17949 17943 17910 18278 17557" />
<polygon
id="polygon1123"
points="17977 17949 18010 17987 17632 18311 17605 18268 17577 18224 17943 17910" />
<polygon
id="polygon1124"
points="17632 18311 17306 18515 17306 18515 17298 18519 17282 18522 17279 18472 17251 18428 17577 18224 17605 18268" />
<polygon
id="polygon1125"
points="17282 18522 17087 18535 17087 18535 17079 18535 17072 18534 17065 18532 17058 18529 17046 18519 17084 18485 17080 18434 17275 18421 17279 18472" />
<g
id="g1126"
style="stroke:rgb(0,0,0);fill:none">
<polyline
id="polyline1127"
style="fill:none"
points="18014 18025 17771 18230 17535 18381 17324 18470 17152 18491 17037 18437 16996 18311 17031 18134 17135 17922 17299 17691 17517 17457 17759 17251 17995 17101 18206 17012 18377 16991 18493 17046 18534 17171 18499 17347 18396 17559 18231 17790 18014 18025" />
</g>
<path
id="path3658"
sodipodi:nodetypes="cccc"
style="fill-rule:evenodd;fill:url(#linearGradient3562)"
d="m17036 18389c-19-263 201-678 531-977l-6713-5661 6182 6638z" />
<metadata
id="metadata197">
<rdf:RDF>
<cc:Work>
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<cc:license
rdf:resource="http://creativecommons.org/licenses/publicdomain/" />
<dc:publisher>
<cc:Agent
rdf:about="http://openclipart.org/">
<dc:title>Openclipart</dc:title>
</cc:Agent>
</dc:publisher>
<dc:date>2009-04-04T06:02:29</dc:date>
<dc:description>Magnifying glass icon by Greg. From old OCAL site.</dc:description>
<dc:source>https://openclipart.org/detail/24012/magnifying-glass-by-anonymous-24012</dc:source>
<dc:creator>
<cc:Agent>
<dc:title>Anonymous</dc:title>
</cc:Agent>
</dc:creator>
<dc:subject>
<rdf:Bag>
<rdf:li>find</rdf:li>
<rdf:li>glass</rdf:li>
<rdf:li>icon</rdf:li>
<rdf:li>magnifying</rdf:li>
<rdf:li>magnifying glass</rdf:li>
<rdf:li>optics</rdf:li>
<rdf:li>search</rdf:li>
<rdf:li>tool</rdf:li>
</rdf:Bag>
</dc:subject>
</cc:Work>
<cc:License
rdf:about="http://creativecommons.org/licenses/publicdomain/">
<cc:permits
rdf:resource="http://creativecommons.org/ns#Reproduction" />
<cc:permits
rdf:resource="http://creativecommons.org/ns#Distribution" />
<cc:permits
rdf:resource="http://creativecommons.org/ns#DerivativeWorks" />
</cc:License>
</rdf:RDF>
</metadata>
</svg>

(Rendered SVG size: 31 KiB)

File diff suppressed because it is too large (new image, 969 KiB)
File diff suppressed because one or more lines are too long (new image, 996 KiB)
Binary file not shown (new image, 21 KiB)
Binary file not shown (new image, 263 KiB)
Binary file not shown (new image, 61 KiB)
Binary file not shown (new image, 93 KiB)
Binary file not shown (new image, 115 KiB)
Binary file not shown (new image, 191 KiB)
File diff suppressed because one or more lines are too long (new image, 996 KiB)
Binary file not shown (new image, 233 KiB)
Binary file not shown (new image, 245 KiB)
File diff suppressed because one or more lines are too long (new image, 374 KiB)
Binary file not shown (new image, 233 KiB)
File diff suppressed because one or more lines are too long (new image, 25 KiB)
Binary file not shown (new image, 124 KiB)
File diff suppressed because it is too large (new image, 2.6 MiB)
Binary file not shown (new image, 228 KiB)
Binary file not shown (new image, 31 KiB)
Binary file not shown (new image, 79 KiB)
BIN src/assets/hdd.jpg Normal file (new image, 31 KiB; binary not shown)
Binary file not shown (new image, 15 KiB)
Binary file not shown (new image, 59 KiB)
Binary file not shown (new image, 18 KiB)


@@ -0,0 +1,15 @@
# Introduction
**Version**: 0.1.0<br>
**As of**: 30.05.2025
![Logo WERNER RI](../assets/Logo_WRI.jpg)
<!-- [<img src="assets/Logo_WRI.jpg" width="720"/>](assets/Logo_WRI.jpg) -->
This concept lays out the essential tasks and structures that
determine a secure, fast, and productive operation of the IT
infrastructure at the law firm `WERNER RI`.
Logical units are grouped into subchapters and described there
in detail.


@@ -0,0 +1,56 @@
# Firewall
Managing the `firewall` is one of the most important concepts for
providing a secure IT infrastructure at `WERNER RI`. The security
level is largely determined by the continuous maintenance and
updating of the deployed software.
The subchapter [Firewall Configuration](ch01-01-firewall.md) describes
the structure of the necessary administrative tasks.
# Network Infrastructure
The implementation of the network infrastructure at `WERNER RI` has
to take both the speed and the security requirements of the firm
into account.
* office segment (Kanzlei)
* administration segment
* external access segment (`Remote Access & VPN`)
* private-cloud segment
* core-server segment
* switching components
The subchapter
[Network Infrastructure](ch01-02-network-infrastructure.md)
describes the administrative tasks required for this.
# Server Structure
The subchapter [Server Structure](ch01-03-server-structure.md) lists
and documents the hardware components that provide the physical
storage at `WERNER RI`. This includes the programs and data of the
operating systems in use, the transactional and working data of the
production systems, as well as backup data.
# Virtualization Structure
Paravirtualization is the central element for reducing complexity
and for providing services and applications.
Horizontally scalable shared-storage concepts
(such as `glusterfs`, `ceph`, `Multi-Server Storage Spaces Direct`)
would considerably increase the complexity of setup and maintenance.
Given the current requirements at `WERNER RI`, they would not
contribute to higher operational reliability. Nevertheless, the
chosen architecture allows a later extension of the server
environment (deliberately condensed onto a single hardware system)
towards a server cluster.
The subchapter [Virtualization Structure](ch01-04-virtualization-structure.md) describes the
administrative tasks required for this.
# Backup/Recovery Structure
The subchapter [Backup and Recovery](ch02-00-backup-and-recovery-structure.md)
documents the implemented solution for internal and
external backup processes.

109
src/ch01-01-firewall.md Normal file

@@ -0,0 +1,109 @@
# Firewall Documentation
**Version**: 0.0.1<br>
**As of**: 31.05.2025
Within the network at `WERNER RI`, all components of the subnets
communicate **exclusively** via the IP protocol
(cf. [**`OSI model`**][OSI_Modell]). Attached components (servers,
NAS systems, workstations, mobile devices) receive a unique
IP address. Identification relies on a 1:1 mapping of the
MAC address of the device's network card to the assigned
IP address.
* servers: static configuration
* workstations: dynamic configuration (via **dhcp**)
* mobile devices: dynamic configuration (via **dhcp**)
The DHCP server assigns IP addresses to unknown MAC addresses
dynamically from an IP pool. Dynamic addresses have a limited
validity period (**lease interval**). They are released again after
the validity period expires. MAC addresses of administered
components, in contrast, always receive their predefined
IP address.
The network segments are differentiated via an IP naming plan
([**`CIDR`** notation][CIDR_Notation]). Network segments are assigned
to the interfaces supervised by the firewall (**interfaces**). This
allows individual rule sets per interface that govern the data flow
(**SRC-IP**, **SRC-Port**, **DST-IP**, **DST-Port**).
The firewall is implemented as a dedicated hardware component. The
open-source software `OPNSense` provides the firewall
functionality. The device has **six** physical network interfaces
(1 GBit each).
| Identifier | Value | Interface |
|:-----------------------|:--------------|-------------------|
| Firewall software: | OPNSense | |
| Release: | 21.7 | |
| Operating system: | FreeBSD | |
| Hostname: | gate | |
| IPv4 address (WRI): | 10.12.10.254 | igb0 |
| IPv4 address (VDSL): | 87.79.238.168 | pppoe->igb5_vlan7 |

| Key | Value |
|:--------------|:--------------------------------------------------|
| CPU: | Intel(R) Celeron(R) CPU N3350 @ 1.10GHz (2 cores) |
| RAM: | 16 GB |
| Medium: | Embedded USB |
| OS capacity: | 28 GB |
### Network segment LAN
All workstations and servers receive an IP address from the
LAN segment. This segment is attached to the firewall interface `LAN`.
| Key | Value |
|:---------|:----------|
| Segment: | **LAN** |
| Service: | dns, dhcp |
### Network segment VPN
A cryptographically secured connection of mobile devices is realized
with the software `wireguard`. For every defined endpoint, a key pair
is generated alongside a configuration file. The public key of the
device and its static IP address are configured in the server
service. On the device itself, the `wireguard` client software is
installed, which establishes and tears down a secured tunnel
according to the imported configuration.
The firewall rules defined for the interface control which services
inside the LAN the devices may access.
| Key | Value |
|:---------|:----------|
| Segment: | **VPN** |
| Service: | wireguard |
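An endpoint's key pair can be generated with the standard wireguard
tools (a sketch; the file names are placeholders):
```console
$ wg genkey | tee peer-laptop.key | wg pubkey > peer-laptop.pub
```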
### Network segment WIFI
Devices attach over a radio interface via access points. An encrypted
connection is established between the WiFi adapter in the device and
the access point. Cryptographic methods prevent transmitted data
packets from being read along or extracted in plain text
(**sniffing**).
The firewall rules defined for the interface control which
communication paths to other network segments (services, NAT)
are allowed or blocked.
| Key | Value |
|:-----------------|:---------------|
| Segment: | **WIFI** |
| Service: | routing |
| Encryption: | WPA2 |
| Band: | 2.4 GHz, 5 GHz |
# References
[OSI_Modell]: <https://de.wikipedia.org/wiki/OSI-Modell>
[CIDR_Notation]: <https://de.wikipedia.org/wiki/Classless_Inter-Domain_Routing>
* OSI model: https://de.wikipedia.org/wiki/OSI-Modell
* CIDR notation: https://de.wikipedia.org/wiki/Classless_Inter-Domain_Routing


@@ -0,0 +1,164 @@
# Network Infrastructure
## WAN connection
At the start of the project, an asynchronous line from the provider
`NetCologne` was activated to connect to the Internet.
![Modem: Telekom Speedport Smart 2](../assets/Telekom-Speedport-Smart-2.png)
At this location, a connection with a higher data rate is only
possible by switching to a fiber line. The current solution options
are limited to VDSL/vectoring technology over copper cables.
As of 2025, the provider NetCologne offers a connection via
telephone cable. The access point is located in the neighboring
building and is extended via LAN cable (CAT-6a) to an RJ-45 socket in
the basement (wall to the right of the fuse box). From there, the
internal structured cabling carries the line into the server rack.
The access and contract data are maintained in the WERNER RI
**Keypass** database.
| Key | Value |
|:-----------|:-----------------------------------|
| Provider: | NetCologne |
| Plan: | VDSL 200 |
| Type: | Asynchronous with fixed IP address |
| Telephone: | 0221/973143-0 |
| Fax: | 0221/973143-99 |
| Download: | 250 Mbit |
| Upload: | 40 Mbit |
## VLANs
VLANs allow the definition of virtual switching environments. When
activated, data packets are only forwarded between the defined
members of a VLAN.
* **Pro**: increases data security.
  Several LAN segments can be separated on one switch
  (e.g. VoIP, workgroup 1, workgroup 2).
* **Con**: explicit management is required.
  The patch cable in the port of the structured floor cabling
  (the workplace to be connected) has to be plugged into the
  designated switch port. The switch port must be explicitly
  configured for the VLAN. Complexity increases.
## Bonding
Several ports of a switch can be aggregated. The aggregation makes it
possible to
1. compensate for port failures
   If one port of the port group fails, the data packets of
   subsequent communication requests are transparently carried over
   the remaining ports of the bond, without intervention.
2. improve bandwidth utilization
   The bond can be configured so that connection requests are split
   dynamically across the available ports of the bond. Example: for
   the duration of the active connection, stream A is always handled
   via port 1, stream B always via port 2. New connection requests
   are routed over the less utilized port, depending on load. The
   deployed switches are optimized, with respect to the bandwidth per
   port, for the workplace conditions (`use case`).
In the current configuration, **bonding** is planned for the ports of
the floor link (switch 1st floor -> network port in the server).
The two 40 GBit capable QSFP+ ports are operated on the switch as a
bond (bond-40g, balance-alb). This bond is assigned to a bridge. The
bridge allows data requests to be exchanged between the workplaces
and the services of the respective server instances (**containers**),
as sketched below.
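On a switch running RouterOS (such as the MikroTik CRS326 referenced
below), this bond and its bridge assignment could look roughly like
this (a sketch; the port and bridge names are assumptions):
```console
/interface bonding add name=bond-40g mode=balance-alb slaves=qsfpplus1-1,qsfpplus2-1
/interface bridge port add bridge=bridge1 interface=bond-40g
```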
## Switch hardware
Workstations in the production network (`Kanzlei`) are attached with
1 Gbps RJ45 network cards. For this, the switches have to provide
port connectivity via SFP+ (`fiber module`) or RJ45
(`Cat7 copper cable`).
If higher transmission speeds become necessary (in particular:
`server attachment`), the switches must provide ports with SFP+
modules (`fiber`, `Cat7 copper cable`). This technology allows
data rates of 10, 25, 50, or 100 Gbps.
### Switching, server room
The network attachment of the servers installed in the WERNER RI
**rack** (basement) is provided via
* 10 Gbps uplink to the office core switch
* 10 Gbps attachment of the production and backup servers (cross cabling)
### Switching, office floors
On the office floors (ground floor, 1st floor, 2nd floor) there is
structured CAT-7a cabling that terminates in RJ-45 sockets in the
respective rooms. The respective far ends terminate in rack panels
(basement).
The switches listed below are deployed.
<!-- ![Switch: MikroTik 24-Port CRS326](../assets/microtik-crs326-24s-2q.png) -->
![Switch: Netgear 42port](../assets/netgear-42port.png)
* 10 Gbps uplink to the server switch (CAT-7 cable)
* 1 Gbps attachment of the clients (CAT-7 cable)
The access and configuration data
are maintained in the WERNER RI **Keypass** database.
| Key | Value |
|:------------------|:---------------------------|
| Manufacturer: | Netgear |
| Type: | 42port+ |
| Operating system: | RouterOS v7.5 |
| Hostname: | switch1 |
| IP address: | 10.12.10.241 |
| LAN ports: | 42 * 1 Gbit RJ45 |
| Uplink ports: | 2 * SFP+, 2 * 10 Gbit RJ45 |
## WIFI
On the office floor (1st floor), the WLAN router listed in the
following table is deployed.
![WiFi6: TP-Link](../assets/TP-Link.png)
The access and configuration data
are maintained in the WERNER RI **Keypass** database.
| Key | Value |
|:------------------|:--------------------|
| Manufacturer: | TP-Link |
| Type: | AP-4 |
| Operating system: | TP-Link |
| Hostname: | creature |
| IP address: | 10.12.10. |
| 2.4 GHz SSID: | WRI-2.4GHz |
| 5 GHz SSID: | WRI--5GHz |
| Guest SSID: | WRI-Guest |
## VLAN structure
* VLAN **WERNER RI**
  Communication between WERNER RI workplaces and server instances.
* VLAN **WIFI**
  External devices attach via radio (**WIFI**) through an
  access point which, via the SSID (**guest net**), only allows
  communication to the Internet.
  Via the SSID (**WRI**), devices with the appropriate
  authorization can be integrated into the WRI LAN.


@@ -0,0 +1,148 @@
## Servers
The physical location of the WERNER RI server systems
(**Kanzlei-LAN**) is a dedicated 19'' rack (**RACK**) in the basement
(`1. UG`). The rooms are secured by a security door.
The electrical sub-distribution secures the power supply for the
server environment with suitable fuses.
### Production server
![Server: Produktion (GIGABYTE_R163-Z32)](assets/GIGABYTE_R165-z32-front.webp)
All production data is processed on the main server (**GIGABYTE
R163-Z32-AAC1**). The system has dedicated operating-system
disks (2* M.2) and dedicated disks for the working data
(6* NVMe). The system has 32 CPU cores.
Microsoft Windows Server is used exclusively as the operating
system.
The most important hardware characteristics are compiled in the
following table.
| Identifier | Value | MAC address |
|:----------------------------|:--------------------------------------------|-------------------|
| Hardware type: | GIGABYTE R163-Z32-AAC1 | |
| Operating system: | Windows Server | |
| Release: | 2025 (Datacenter) | |
| Hostname: | wrihv1 | |
| IPv4 address (bridge-lan): | 10.12.10.1 | |
| Hostname: | wrihv1-idrac | |
| IPv4 address (ipmi): | 10.12.10.91 | |
| CPU: | AMD 9032 (32* 3.4 GHz) | |
| RAM (DDR5): | 256 GB | |
The operating system of the Hyper-V server is installed on a
redundant pair of two solid-state disks (**SSD**).
| OS system | |
|:--------------|:--------------|
| Medium: | SSD (M.2) |
| OS capacity: | 980 GB |
| Controller: | Broadcom 9002 |
| Connector: | M.2 |
| Redundancy: | Raid1 |
The system boots via a UEFI environment (**bootmgr**) that points to
the actual operating system.
The boot loader options point to the Microsoft operating system
(**C:\**), which provides the operating system environment active
in production.
| Key | Value |
|:-----------------|:------------------------------------------|
| Medium: | NVMe (M.2) |
| Base capacity: | 980 GB |
| Controller: | Broadcom 9002 |
| Connector: | M.2 |
| Redundancy: | Raid1 |
### Backup server (inhouse)
![Server: Backup (Dell PowerEdge R730 - 3,5'')](assets/Dell_PowerEdge_R730.png)
Daily backup jobs are scheduled for all production-relevant
data. They are transferred as incremental backups to the dedicated
backup server (**Dell PowerEdge R730**).
The backup system is designed to manage up to 6 backup disks
(6 * 3,5''). The system has 2 CPUs and 64 GB RAM.
The most important hardware characteristics are compiled in the
following table.
| Identifier | Value | MAC address |
|:---------------------------|:----------------------------------------|-------------------|
| Hardware type: | Dell R730 | |
| Operating system: | Windows Server | |
| Release: | 2025 | |
| Hostname: | wribackup | |
| IPv4 address (bridge-lan): | 10.12.10.10 | |
| IPv4 address (iDRAC): | 10.12.10.90 | |
| CPU: | 8* 1,6GHz | |
| RAM (DDR4): | 64 GB | |

| Key | Value |
|:--------------|:------------|
| Medium: | SATA |
| OS capacity: | 250 GB |
| Attachment: | SATA |
| Redundancy: | Raid1 |
The `UEFI partitions` reside redundantly on two SSD disks, which are
equipped with a base installation of a Linux operating system for the
disaster case only. In addition, USB storage media are maintained
that hold a mirror of a working boot environment.
The partitions of the dedicated operating-system disks are formatted
as a redundant RAID system (**raid1**).
### Backup server (remote)
![Server: Backup (Dell PowerEdge T630 - 3,5'')](assets/Dell_PowerEdge_T630.png)
To be prepared against physical damage inside the production office,
scheduled online backups are taken onto the remote backup server
(**Dell PowerEdge T630**).
## NAS Systeme
Zusätzlich zu den im Kellergeschoss aufgestellten Produktions-Servern
wird ein NAS System betrieben. Dieses System befindet sich im 1. OG
der Kanzlei von **WERNER RI**.
![NAS: Synology DWS1520+)](assets/Synology_DWS1520+.png)
Das verfügbare Backup Datenvolumen (Speicher-Pool) kann über ein
**iSCSI-target** als Netzwerkressource bereitgestellt werden
(vgl. Notfall-Umgebung). Die Authorisierung von **iSCSI-Initiatoren**
erfolgt über das CHAP Protokoll (Client-Komponente).
| NAS store         | Synology                                     |
|:------------------|:---------------------------------------------|
| Type:             | Synology DWS1520+                            |
| Operating system: | DSM                                          |
| Release:          | 7.0.1-42218 Update 3                         |
| Hostname:         | geoff                                        |
| Network type:     | bridge                                       |
| IPv4 address:     | 172.17.10.32                                 |
| iSCSI target:     | iqn.2008-10.com.gringo-films.koeln:zfs-store |
| LUN:              | zfs-store                                    |
The system has four HDD disks.

| Data storage  | Backup storage   |
|:--------------|:-----------------|
| Medium:       | SATA (7,200 rpm) |
| Net capacity: | 16 TB (4 × 4 TB) |
| Hot spare:    | 4 TB (1 × 4 TB)  |
| Connection:   | 3.5'' hot-plug   |
| Redundancy:   | Raid5 (ext4)     |
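
A minimal sketch for attaching a Windows host as an **iSCSI
initiator** to the target listed above, using one-way CHAP; the user
name and secret below are placeholders that have to match the client
component configured on the NAS:

```console
# register the NAS as a target portal
$ New-IscsiTargetPortal -TargetPortalAddress 172.17.10.32

# connect to the target with one-way CHAP authentication
$ Connect-IscsiTarget -NodeAddress "iqn.2008-10.com.gringo-films.koeln:zfs-store" `
    -AuthenticationType ONEWAYCHAP -ChapUsername "backup" -ChapSecret "<secret>"
```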

@@ -0,0 +1,36 @@
# Virtualization Structure

## Summary

The following table lists the **virtual machines** (`VMs`) required
for production operation. They provide all the services used by the
digital processes of the firm.

| NAME      | STATE   | IPV4                | IPV6 | TYPE      | SNAPSHOTS | Service                  |
|-----------|---------|---------------------|------|-----------|-----------|--------------------------|
| wridc     | RUNNING | 10.12.10.21 (eth0)  |      | CONTAINER | 0         | Primary AD-Server        |
| wriproxy  | RUNNING | 172.17.10.33 (eth0) |      | CONTAINER | 0         | 3CX VoIP Server          |
| wrimatrix | RUNNING | 10.12.10.34 (eth0)  |      | CONTAINER | 0         | 3CX VoIP Server          |
| wridata   | RUNNING | 10.12.10.31 (eth0)  |      | CONTAINER | 0         | Publishing Service       |
| wriadmin  | RUNNING | 10.12.10.32 (eth0)  |      | CONTAINER | 0         | Nextcloud Services       |
| writerm1  | RUNNING | 10.12.10.41 (eth0)  |      | CONTAINER | 0         | File Server              |
| writerm2  | STOPPED | 10.12.10.42 (eth0)  |      | CONTAINER | 0         | (Obsolete) ShotGrid Sync |
|           |         |                     |      |           |           | Build-Server             |
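
The state shown in the table can be queried live on the Hyper-V host;
a minimal sketch (the selected columns may vary):

```console
$ Get-VM | Select-Object Name, State | Sort-Object Name
```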
## VM Subsystem
All subsystems in the `production system` of `WERNER RI` are designed
so that logical units are mapped into dedicated `virtual machines`.

These **VMs** are managed via the
[`Hyper-V role`](https://microsoft.com/HyperV/), which is optimized
for controlling paravirtualized environments.

```
Note:
New VMs are created via the existing PowerShell scripts.
**TODO**: Describe the usage of the PowerShell scripts (*.ps1).
```
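
Until those scripts are documented, the following sketch shows how a
new VM could be created manually with the Hyper-V cmdlets; all names,
paths, and sizes below are placeholders:

```console
# create a generation-2 VM attached to the bridged LAN switch
$ New-VM -Name "writest" -Generation 2 -MemoryStartupBytes 4GB `
    -SwitchName "bridge-lan" `
    -NewVHDPath "C:\VMs\writest\writest.vhdx" -NewVHDSizeBytes 80GB

$ Start-VM -Name "writest"
```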

@@ -0,0 +1,23 @@
# Storage Structure

*** WIP ***

## Concept

Microsoft Storage Spaces Direct (`S2D`) as a role within a
single-server cluster.

`S2D` is provided as a redundant storage structure within the
cluster, together with the Hyper-V role on the physical host.

* Chassis: GIGABYTE host
* Pool: 6 × NVMe for transactional data
* Volumes: areas for data placement
  * virtual hard disks for the operating systems of the VMs
  * virtual hard disks for transactional data
  * virtual hard disks for providing installation files
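
A minimal sketch of how the pool and a first volume could be
provisioned once the cluster exists; the friendly names and the size
are placeholders:

```console
# claim the eligible NVMe drives into the S2D storage pool
$ Enable-ClusterStorageSpacesDirect

# create a mirrored cluster-shared volume for the VM operating-system disks
$ New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "vm-os" `
    -FileSystem CSVFS_ReFS -Size 1TB
```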

@@ -0,0 +1,28 @@
# Backup and Recovery structure
Backups are essential to recover from catastrophic disasters. We are
prepared not to lose any essential data!

Since we have to handle a huge amount of transactional data, the
processes have to be optimized to suit our needs:

* storage capacity of at least 120% of the production capacity
* storage capacity has to be supervised
At `GRINGO Films` we have to provide the single authoritative
instance where all produced film production data comes together.
This data **has to be** saved in a redundant way:

* in at least 3 different systems
* in at least two distinct locations
* with synchronization methods that allow incremental snapshotting
Subsection [Inhouse Backup and Recovery](ch02-01-inhouse-backup.md)
documents the procedures taken to provide redundant data storage
inside the LAN of our production site.

Subsection [Remote Backup and Recovery](ch02-02-remote-backup.md)
documents the procedures taken to assure off-site data storage that
synchronizes the data to an external location.

@@ -0,0 +1,18 @@
# Backup and Recovery structure
*** WIP ***
* Backups via VEEAM Backup

## Inhouse

* Nightly: Incremental Backups
* Weekly: Full Backups

Everything goes to the backup store of the backup server.

## External

* VEEAM external storage on a Windows server in Dellbrück
* Remotely stored backups via WAN accelerator
* Daily: Incremental Backups
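
A sketch for inspecting and manually starting these jobs via the
Veeam PowerShell module on the backup server; the job name is a
placeholder:

```console
# list the configured backup jobs and their scheduling state
$ Get-VBRJob | Select-Object Name, JobType, IsScheduleEnabled

# kick off a job manually
$ Get-VBRJob -Name "wri-nightly-incremental" | Start-VBRJob
```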

@@ -0,0 +1,10 @@
# Backup and Recovery structure
WIP: please update the given content.
Description of the infrastructure in Dellbrück.

* Windows system
* Veeam server program
* WAN accelerator
* Storage volume

155
src/lib.rs Normal file
@@ -0,0 +1,155 @@
#![crate_name = "wri_it_konzept"]
#![crate_type = "lib"]
//! <img src="src/assets/Logo_WERNER_RI.png" width="40%" loading="lazy"/>
//!
//!
//! This repository contains the text source documenting the "IT-Konzept at WERNER RI".
//! We will further reference to it as the `IT-Konzept`.
//!
//! ## TLDR
//!
//! To conveniently read an online version of this documentation, please open the link to the
//! [IT-Konzept][IT-Konzept-Doc].
//!
//! [IT-Konzept-Doc]: ./doc/en/index.html
//!
//! ## Requirements
//!
//! Building the rendered text requires the program [mdBook] and its
//! helper tools. The consumed version should ideally be the same that
//! rust-lang/rust uses. Install these tools with:
//!
//! ```console
//! $ cargo install mdbook
//! ```
//!
//! This command will fetch a suitable mdbook version with all needed
//! dependencies from [crates.io].
//!
//! You may extend the call with
//!
//! ```console
//! $ cargo install mdbook mdbook-linkchecker mdbook-mermaid
//! ```
//!
//! This enables us to make use of a link checker to ensure that the
//! links used inside the markdown sources resolve to valid
//! targets. mdbook-mermaid is a preprocessor for mdbook that adds
//! mermaid.js support. We may use it to create graphs that visualize
//! some process flows.
//!
//! [crates.io]: https://crates.io/search?q=mdbook
//!
//! ## Multilingual version of mdBook
//!
//! The documentation aims to make translations as flawless as
//! possible.
//! We are using mainline `mdbook` with the following extensions from the
//! [mdBook-i18n-helpers] crate.
//!
//! This extension implements multilingual support by consuming
//! the gettext / xgettext subsystem.
//!
//! As an alternative, there exists a patch set for version v0.4.15 that
//! adds the needed salt to organize a book as a
//! multilingual structure: all sources stored in a single hierarchical
//! code tree. This work isn't finished yet, but it is good enough to
//! make use of this branch for productive needs. Thank you, [Nutomic
//! and Ruin0x11][mdbook localization].
//!
//! ## Cargo handled README
//!
//! The README.md file you are reading now is auto-generated via the
//! [cargo-readme] crate. It resolves the Rust `doc comments` in
//! `src/lib.rs` and renders the parsed content into a target README.md
//! file. The installation is optional.
//!
//! [cargo-readme]: https://github.com/livioribeiro/cargo-readme
//!
//! You need it if you make changes to `src/lib.rs` and want to
//! update or regenerate the target README like this:
//!
//! ```console
//! $ cargo install cargo-readme
//! $ cargo readme > README.md
//! ```
//!
//! [mdBook]: https://github.com/rust-lang-nursery/mdBook
//! [mdBook-i18n-helpers]: https://github.com/google/mdbook-i18n-helpers
//! [mdBook localization]: https://github.com/Ruin0x11/mdbook/tree/localization
//!
//! ## Building
//!
//! ### Building the documentation
//!
//! To build the documentation with the default language (here: 'de'),
//! change into the project's root directory and type:
//!
//! ```console
//! $ mdbook build --dest-dir doc/de
//! ```
//!
//! The rendered HTML output will be placed underneath the
//! `doc/de` subdirectory. To check it out, open it in your web
//! browser.
//!
//! _Firefox:_
//! ```console
//! $ firefox doc/de/index.html # Linux
//! $ open -a "Firefox" doc/de/index.html # OS X
//! $ Start-Process "firefox.exe" .\doc\de\index.html # Windows (PowerShell)
//! $ start firefox.exe .\doc\de\index.html # Windows (Cmd)
//! ```
//!
//! _Chrome:_
//! ```console
//! $ google-chrome doc/en/index.html # Linux
//! $ open -a "Google Chrome" doc/en/index.html # OS X
//! $ Start-Process "chrome.exe" .\doc\en\index.html # Windows (PowerShell)
//! $ start chrome.exe .\doc\en\index.html # Windows (Cmd)
//! ```
//!
//! Executing `mdbook serve` will have **mdbook** act as a web server
//! which can be accessed by opening the following URL: http://localhost:3000.
//!
//! To run the tests:
//!
//! ```console
//! $ mdbook test
//! ```
//!
//! ### Building a language variant of the book
//!
//! Translated versions of the book are placed inside the code tree
//! in the subdirectory `src/<language id>`.
//!
//! E.g. if you would like to render the English version (language id:
//! 'en'), change into the document's root directory and type:
//!
//! ```console
//! $ mdbook build --dest-dir doc/en --open
//! ```
//!
//! The rendered HTML output will be placed underneath the
//! `doc/en` subdirectory. Since we appended the `--open` parameter, your default browser should be fired up and ... tada!
//!
//! ## Spellchecking
//!
//! To scan source files for spelling errors, you can use the `spellcheck.sh`
//! script. It needs a dictionary of valid words, which is provided in
//! `dictionary.txt`. If the script produces a false positive (say, you used
//! the word `BTreeMap`, which the script considers invalid), you need to add
//! this word to `dictionary.txt` (keep the sorted order for consistency).
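//!
//! A typical invocation might look like this (a sketch; the script's
//! exact location and arguments may differ in this repository):
//!
//! ```console
//! $ ./spellcheck.sh
//! ```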
//!
//! ## License
//!
//! <!-- License source -->
//! [Logo-CC_BY]: https://i.creativecommons.org/l/by/4.0/88x31.png "Creative Common Logo"
//! [License-CC_BY]: https://creativecommons.org/licenses/by/4.0/legalcode "Creative Common License"
//!
//! This work is licensed under a [Creative Common License 4.0][License-CC_BY]
//!
//! ![Creative Common Logo][Logo-CC_BY]
//!
//! © 2025 Ralf Zerres

20
src/title-page.md Normal file
@@ -0,0 +1,20 @@
# Summary

<!-- [<img src="assets/Logo_WERNER_RI.png" width="720"/>](assets/Logo_WERNER_RI.png) -->

To read this documentation online, an HTML-rendered version is
provided at [IT-Konzept][it_konzept].

Alternatively, it can also be installed locally for offline use. To
do so, please download a rendered `pdf` or `ebook` version.

The offline versions are built from the sources via the following
program call:
```console
mdbook build --dest-dir doc/de/
```
[it_konzept]: https://gitea.networkx.de/WRI/IT-Konzept/doc/de

50
theme/2020-edition.css Normal file
@@ -0,0 +1,50 @@
/*
Taken from the reference.
Warnings and notes:
Write the <div>s on their own line. E.g.
<div class="warning">
Warning: This is bad!
</div>
*/
main .warning p {
padding: 10px 20px;
margin: 20px 0;
}
main .warning p::before {
content: "⚠️ ";
}
.light main .warning p,
.rust main .warning p {
border: 2px solid red;
background: #ffcece;
}
.rust main .warning p {
/* overrides previous declaration */
border-color: #961717;
}
.coal main .warning p,
.navy main .warning p,
.ayu main .warning p {
background: #542626
}
/* Make the links higher contrast on dark themes */
.coal main .warning p a,
.navy main .warning p a,
.ayu main .warning p a {
color: #80d0d0
}
span.caption {
font-size: .8em;
font-weight: 600;
}
span.caption code {
font-size: 0.875em;
font-weight: 400;
}

13
tools/convert-quotes.sh Executable file
@@ -0,0 +1,13 @@
#!/bin/bash
set -eu
dir=$1
mkdir -p "tmp/$dir"
for f in $dir/*.md
do
cat "$f" | cargo run --bin convert_quotes > "tmp/$f"
mv "tmp/$f" "$f"
done

20
tools/doc-to-md.sh Executable file
@@ -0,0 +1,20 @@
#!/bin/bash
set -eu
# Get all the docx files in the tmp dir.
ls tmp/*.docx | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename -s .docx | \
while IFS= read -r filename; do
# Make a directory to put the XML in.
mkdir -p "tmp/$filename"
# Unzip the docx to get at the XML.
unzip -o "tmp/$filename.docx" -d "tmp/$filename"
# Convert to markdown with XSL.
xsltproc tools/docx-to-md.xsl "tmp/$filename/word/document.xml" | \
# Hard wrap at 80 chars at word bourdaries.
fold -w 80 -s | \
# Remove trailing whitespace and save in the `nostarch` dir for comparison.
sed -e "s/ *$//" > "nostarch/$filename.md"
done

220
tools/docx-to-md.xsl Normal file
@@ -0,0 +1,220 @@
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml">
<xsl:output method="text" />
<xsl:template match="/">
<xsl:apply-templates select="/w:document/w:body/*" />
</xsl:template>
<!-- Ignore these -->
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'TOC')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents1')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents2')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents3')]" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ChapterStart']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Normal']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Standard']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'AuthorQuery']" />
<xsl:template match="w:p[w:pPr[not(w:pStyle)]]" />
<!-- Paragraph styles -->
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ChapterTitle']">
<xsl:text>&#10;[TOC]&#10;&#10;</xsl:text>
<xsl:text># </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadA']">
<xsl:text>## </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadB']">
<xsl:text>### </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadC']">
<xsl:text>#### </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadBox']">
<xsl:text>### </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'NumListA' or @w:val = 'NumListB']]">
<xsl:text>1. </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'NumListC']]">
<xsl:text>1. </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BulletA' or @w:val = 'BulletB' or @w:val = 'ListPlainA' or @w:val = 'ListPlainB']]">
<xsl:text>* </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BulletC' or @w:val = 'ListPlainC']]">
<xsl:text>* </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'SubBullet']]">
<xsl:text> * </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BodyFirst' or @w:val = 'Body' or @w:val = 'BodyFirstBox' or @w:val = 'BodyBox' or @w:val = '1stPara']]">
<xsl:if test=".//w:t">
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:if>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeA' or @w:val = 'CodeAWingding']]">
<xsl:text>```&#10;</xsl:text>
<!-- Don't apply Emphasis/etc templates in code blocks -->
<xsl:for-each select="w:r">
<xsl:value-of select="w:t" />
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeB' or @w:val = 'CodeBWingding']]">
<!-- Don't apply Emphasis/etc templates in code blocks -->
<xsl:for-each select="w:r">
<xsl:value-of select="w:t" />
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeC' or @w:val = 'CodeCWingding']]">
<!-- Don't apply Emphasis/etc templates in code blocks -->
<xsl:for-each select="w:r">
<xsl:value-of select="w:t" />
</xsl:for-each>
<xsl:text>&#10;```&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'CodeSingle']">
<xsl:text>```&#10;</xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;```&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ProductionDirective']">
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'Caption' or @w:val = 'TableTitle' or @w:val = 'Caption1' or @w:val = 'Listing']]">
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BlockQuote']]">
<xsl:text>> </xsl:text>
<xsl:apply-templates select="*" />
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BlockText']]">
<xsl:text>&#10;</xsl:text>
<xsl:text>> </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Note']">
<xsl:text>> </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p">
Unmatched: <xsl:value-of select="w:pPr/w:pStyle/@w:val" />
<xsl:text>
</xsl:text>
</xsl:template>
<!-- Character styles -->
<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'Literal' or @w:val = 'LiteralBold' or @w:val = 'LiteralCaption' or @w:val = 'LiteralBox']]">
<xsl:choose>
<xsl:when test="normalize-space(w:t) != ''">
<xsl:if test="starts-with(w:t, ' ')">
<xsl:text> </xsl:text>
</xsl:if>
<xsl:text>`</xsl:text>
<xsl:value-of select="normalize-space(w:t)" />
<xsl:text>`</xsl:text>
<xsl:if test="substring(w:t, string-length(w:t)) = ' '">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:when>
<xsl:when test="normalize-space(w:t) != w:t and w:t != ''">
<xsl:text> </xsl:text>
</xsl:when>
</xsl:choose>
</xsl:template>
<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'EmphasisBold']]">
<xsl:choose>
<xsl:when test="normalize-space(w:t) != ''">
<xsl:if test="starts-with(w:t, ' ')">
<xsl:text> </xsl:text>
</xsl:if>
<xsl:text>**</xsl:text>
<xsl:value-of select="normalize-space(w:t)" />
<xsl:text>**</xsl:text>
<xsl:if test="substring(w:t, string-length(w:t)) = ' '">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:when>
<xsl:when test="normalize-space(w:t) != w:t and w:t != ''">
<xsl:text> </xsl:text>
</xsl:when>
</xsl:choose>
</xsl:template>
<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'EmphasisItalic' or @w:val = 'EmphasisItalicBox' or @w:val = 'EmphasisNote' or @w:val = 'EmphasisRevCaption' or @w:val = 'EmphasisRevItal']]">
<xsl:choose>
<xsl:when test="normalize-space(w:t) != ''">
<xsl:if test="starts-with(w:t, ' ')">
<xsl:text> </xsl:text>
</xsl:if>
<xsl:text>*</xsl:text>
<xsl:value-of select="normalize-space(w:t)" />
<xsl:text>*</xsl:text>
<xsl:if test="substring(w:t, string-length(w:t)) = ' '">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:when>
<xsl:otherwise>
<xsl:text> </xsl:text>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template match="w:r">
<xsl:value-of select="w:t" />
</xsl:template>
</xsl:stylesheet>

22
tools/megadiff.sh Executable file
@@ -0,0 +1,22 @@
#!/bin/bash
set -eu
# Remove files that are never affected by rustfmt or are otherwise uninteresting
rm -rf tmp/book-before/css/ tmp/book-before/theme/ tmp/book-before/img/ tmp/book-before/*.js \
tmp/book-before/FontAwesome tmp/book-before/*.css tmp/book-before/*.png \
tmp/book-before/*.json tmp/book-before/print.html
rm -rf tmp/book-after/css/ tmp/book-after/theme/ tmp/book-after/img/ tmp/book-after/*.js \
tmp/book-after/FontAwesome tmp/book-after/*.css tmp/book-after/*.png \
tmp/book-after/*.json tmp/book-after/print.html
# Get all the html files before
ls tmp/book-before/*.html | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename | \
while IFS= read -r filename; do
# Remove any files that are the same before and after
diff "tmp/book-before/$filename" "tmp/book-after/$filename" > /dev/null \
&& rm "tmp/book-before/$filename" "tmp/book-after/$filename"
done

@@ -0,0 +1,115 @@
#[macro_use]
extern crate lazy_static;
use std::collections::BTreeMap;
use std::env;
use std::fs::{create_dir, read_dir, File};
use std::io;
use std::io::{Read, Write};
use std::path::{Path, PathBuf};
use std::process::exit;
use regex::Regex;
static PATTERNS: &'static [(&'static str, &'static str)] = &[
(r"ch(\d\d)-\d\d-.*\.md", "chapter$1.md"),
(r"appendix-(\d\d).*\.md", "appendix.md"),
];
lazy_static! {
static ref MATCHERS: Vec<(Regex, &'static str)> = {
PATTERNS
.iter()
.map(|&(expr, repl)| (Regex::new(expr).unwrap(), repl))
.collect()
};
}
fn main() {
let args: Vec<String> = env::args().collect();
if args.len() < 3 {
println!("Usage: {} <src-dir> <target-dir>", args[0]);
exit(1);
}
let source_dir = ensure_dir_exists(&args[1]).unwrap();
let target_dir = ensure_dir_exists(&args[2]).unwrap();
let mut matched_files = match_files(source_dir, target_dir);
matched_files.sort();
for (target_path, source_paths) in group_by_target(matched_files) {
concat_files(source_paths, target_path).unwrap();
}
}
fn match_files(
source_dir: &Path,
target_dir: &Path,
) -> Vec<(PathBuf, PathBuf)> {
read_dir(source_dir)
.expect("Unable to read source directory")
.filter_map(|maybe_entry| maybe_entry.ok())
.filter_map(|entry| {
let source_filename = entry.file_name();
let source_filename =
&source_filename.to_string_lossy().into_owned();
for &(ref regex, replacement) in MATCHERS.iter() {
if regex.is_match(source_filename) {
let target_filename =
regex.replace_all(source_filename, replacement);
let source_path = entry.path();
let mut target_path = PathBuf::from(&target_dir);
target_path.push(target_filename.to_string());
return Some((source_path, target_path));
}
}
None
})
.collect()
}
fn group_by_target(
matched_files: Vec<(PathBuf, PathBuf)>,
) -> BTreeMap<PathBuf, Vec<PathBuf>> {
let mut grouped: BTreeMap<PathBuf, Vec<PathBuf>> = BTreeMap::new();
for (source, target) in matched_files {
if let Some(source_paths) = grouped.get_mut(&target) {
source_paths.push(source);
continue;
}
let source_paths = vec![source];
grouped.insert(target.clone(), source_paths);
}
grouped
}
fn concat_files(
source_paths: Vec<PathBuf>,
target_path: PathBuf,
) -> io::Result<()> {
println!("Concatenating into {}:", target_path.to_string_lossy());
let mut target = File::create(target_path)?;
target.write_all(b"\n[TOC]\n")?;
for path in source_paths {
println!(" {}", path.to_string_lossy());
let mut source = File::open(path)?;
let mut contents: Vec<u8> = Vec::new();
source.read_to_end(&mut contents)?;
target.write_all(b"\n")?;
target.write_all(&contents)?;
target.write_all(b"\n")?;
}
Ok(())
}
fn ensure_dir_exists(dir_string: &str) -> io::Result<&Path> {
let path = Path::new(dir_string);
if !path.exists() {
create_dir(path)?;
}
Ok(&path)
}

@@ -0,0 +1,78 @@
use std::io;
use std::io::{Read, Write};
fn main() {
let mut is_in_code_block = false;
let mut is_in_inline_code = false;
let mut is_in_html_tag = false;
let mut buffer = String::new();
if let Err(e) = io::stdin().read_to_string(&mut buffer) {
panic!("{}", e);
}
for line in buffer.lines() {
if line.is_empty() {
is_in_inline_code = false;
}
if line.starts_with("```") {
is_in_code_block = !is_in_code_block;
}
if is_in_code_block {
is_in_inline_code = false;
is_in_html_tag = false;
write!(io::stdout(), "{}\n", line).unwrap();
} else {
let modified_line = &mut String::new();
let mut previous_char = std::char::REPLACEMENT_CHARACTER;
let mut chars_in_line = line.chars();
while let Some(possible_match) = chars_in_line.next() {
// Check if inside inline code.
if possible_match == '`' {
is_in_inline_code = !is_in_inline_code;
}
// Check if inside HTML tag.
if possible_match == '<' && !is_in_inline_code {
is_in_html_tag = true;
}
if possible_match == '>' && !is_in_inline_code {
is_in_html_tag = false;
}
// Replace with right/left apostrophe/quote.
let char_to_push = if possible_match == '\''
&& !is_in_inline_code
&& !is_in_html_tag
{
if (previous_char != std::char::REPLACEMENT_CHARACTER
    && !previous_char.is_whitespace())
    || previous_char == '‘'
{
    '’'
} else {
    '‘'
}
} else if possible_match == '"'
&& !is_in_inline_code
&& !is_in_html_tag
{
if (previous_char != std::char::REPLACEMENT_CHARACTER
&& !previous_char.is_whitespace())
|| previous_char == '“'
{
'”'
} else {
'“'
}
} else {
// Leave untouched.
possible_match
};
modified_line.push(char_to_push);
previous_char = char_to_push;
}
write!(io::stdout(), "{}\n", modified_line).unwrap();
}
}
}

252
tools/src/bin/lfp.rs Normal file
@@ -0,0 +1,252 @@
// We have some long regex literals, so:
// ignore-tidy-linelength
use docopt::Docopt;
use serde::Deserialize;
use std::io::BufRead;
use std::{fs, io, path};
fn main() {
let args: Args = Docopt::new(USAGE)
.and_then(|d| d.deserialize())
.unwrap_or_else(|e| e.exit());
let src_dir = &path::Path::new(&args.arg_src_dir);
let found_errs = walkdir::WalkDir::new(src_dir)
.min_depth(1)
.into_iter()
.map(|entry| match entry {
Ok(entry) => entry,
Err(err) => {
eprintln!("{:?}", err);
std::process::exit(911)
}
})
.map(|entry| {
let path = entry.path();
if is_file_of_interest(path) {
let err_vec = lint_file(path);
for err in &err_vec {
match *err {
LintingError::LineOfInterest(line_num, ref line) => {
eprintln!(
"{}:{}\t{}",
path.display(),
line_num,
line
)
}
LintingError::UnableToOpenFile => {
eprintln!("Unable to open {}.", path.display())
}
}
}
!err_vec.is_empty()
} else {
false
}
})
.collect::<Vec<_>>()
.iter()
.any(|result| *result);
if found_errs {
std::process::exit(1)
} else {
std::process::exit(0)
}
}
const USAGE: &'static str = "
counter
Usage:
lfp <src-dir>
lfp (-h | --help)
Options:
-h --help Show this screen.
";
#[derive(Debug, Deserialize)]
struct Args {
arg_src_dir: String,
}
fn lint_file(path: &path::Path) -> Vec<LintingError> {
match fs::File::open(path) {
Ok(file) => lint_lines(io::BufReader::new(&file).lines()),
Err(_) => vec![LintingError::UnableToOpenFile],
}
}
fn lint_lines<I>(lines: I) -> Vec<LintingError>
where
I: Iterator<Item = io::Result<String>>,
{
lines
.enumerate()
.map(|(line_num, line)| {
let raw_line = line.unwrap();
if is_line_of_interest(&raw_line) {
Err(LintingError::LineOfInterest(line_num, raw_line))
} else {
Ok(())
}
})
.filter(|result| result.is_err())
.map(|result| result.unwrap_err())
.collect()
}
fn is_file_of_interest(path: &path::Path) -> bool {
path.extension().map_or(false, |ext| ext == "md")
}
fn is_line_of_interest(line: &str) -> bool {
!line
.split_whitespace()
.filter(|sub_string| {
sub_string.contains("file://")
&& !sub_string.contains("file:///projects/")
})
.collect::<Vec<_>>()
.is_empty()
}
#[derive(Debug)]
enum LintingError {
UnableToOpenFile,
LineOfInterest(usize, String),
}
#[cfg(test)]
mod tests {
use std::path;
#[test]
fn lint_file_returns_a_vec_with_errs_when_lines_of_interest_are_found() {
let string = r#"
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/guessing_game`
Guess the number!
The secret number is: 61
Please input your guess.
10
You guessed: 10
Too small!
Please input your guess.
99
You guessed: 99
Too big!
Please input your guess.
foo
Please input your guess.
61
You guessed: 61
You win!
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/debug/guessing_game`
Guess the number!
The secret number is: 7
Please input your guess.
4
You guessed: 4
$ cargo run
Running `target/debug/guessing_game`
Guess the number!
The secret number is: 83
Please input your guess.
5
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/debug/guessing_game`
Hello, world!
"#;
let raw_lines = string.to_string();
let lines = raw_lines.lines().map(|line| Ok(line.to_string()));
let result_vec = super::lint_lines(lines);
assert!(!result_vec.is_empty());
assert_eq!(3, result_vec.len());
}
#[test]
fn lint_file_returns_an_empty_vec_when_no_lines_of_interest_are_found() {
let string = r#"
$ cargo run
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
Running `target/guessing_game`
Guess the number!
The secret number is: 61
Please input your guess.
10
You guessed: 10
Too small!
Please input your guess.
99
You guessed: 99
Too big!
Please input your guess.
foo
Please input your guess.
61
You guessed: 61
You win!
"#;
let raw_lines = string.to_string();
let lines = raw_lines.lines().map(|line| Ok(line.to_string()));
let result_vec = super::lint_lines(lines);
assert!(result_vec.is_empty());
}
#[test]
fn is_file_of_interest_returns_false_when_the_path_is_a_directory() {
let uninteresting_fn = "src/img";
assert!(!super::is_file_of_interest(path::Path::new(
uninteresting_fn
)));
}
#[test]
fn is_file_of_interest_returns_false_when_the_filename_does_not_have_the_md_extension(
) {
let uninteresting_fn = "src/img/foo1.png";
assert!(!super::is_file_of_interest(path::Path::new(
uninteresting_fn
)));
}
#[test]
fn is_file_of_interest_returns_true_when_the_filename_has_the_md_extension()
{
let interesting_fn = "src/ch01-00-introduction.md";
assert!(super::is_file_of_interest(path::Path::new(interesting_fn)));
}
#[test]
fn is_line_of_interest_does_not_report_a_line_if_the_line_contains_a_file_url_which_is_directly_followed_by_the_project_path(
) {
let sample_line =
"Compiling guessing_game v0.1.0 (file:///projects/guessing_game)";
assert!(!super::is_line_of_interest(sample_line));
}
#[test]
fn is_line_of_interest_reports_a_line_if_the_line_contains_a_file_url_which_is_not_directly_followed_by_the_project_path(
) {
let sample_line = "Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)";
assert!(super::is_line_of_interest(sample_line));
}
}

415
tools/src/bin/link2print.rs Normal file
@@ -0,0 +1,415 @@
// FIXME: we have some long lines that could be refactored, but it's not a big deal.
// ignore-tidy-linelength
use regex::{Captures, Regex};
use std::collections::HashMap;
use std::io;
use std::io::{Read, Write};
fn main() {
write_md(parse_links(parse_references(read_md())));
}
fn read_md() -> String {
let mut buffer = String::new();
match io::stdin().read_to_string(&mut buffer) {
Ok(_) => buffer,
Err(error) => panic!("{}", error),
}
}
fn write_md(output: String) {
write!(io::stdout(), "{}", output).unwrap();
}
fn parse_references(buffer: String) -> (String, HashMap<String, String>) {
let mut ref_map = HashMap::new();
// FIXME: currently doesn't handle "title" in following line.
let re = Regex::new(r###"(?m)\n?^ {0,3}\[([^]]+)\]:[[:blank:]]*(.*)$"###)
.unwrap();
let output = re.replace_all(&buffer, |caps: &Captures<'_>| {
let key = caps.get(1).unwrap().as_str().to_uppercase();
let val = caps.get(2).unwrap().as_str().to_string();
if ref_map.insert(key, val).is_some() {
panic!("Did not expect markdown page to have duplicate reference");
}
"".to_string()
}).to_string();
(output, ref_map)
}
fn parse_links((buffer, ref_map): (String, HashMap<String, String>)) -> String {
// FIXME: check which punctuation is allowed by spec.
let re = Regex::new(r###"(?:(?P<pre>(?:```(?:[^`]|`[^`])*`?\n```\n)|(?:[^\[]`[^`\n]+[\n]?[^`\n]*`))|(?:\[(?P<name>[^]]+)\](?:(?:\([[:blank:]]*(?P<val>[^")]*[^ ])(?:[[:blank:]]*"[^"]*")?\))|(?:\[(?P<key>[^]]*)\]))?))"###).expect("could not create regex");
let error_code =
Regex::new(r###"^E\d{4}$"###).expect("could not create regex");
let output = re.replace_all(&buffer, |caps: &Captures<'_>| {
match caps.name("pre") {
Some(pre_section) => format!("{}", pre_section.as_str()),
None => {
let name = caps.name("name").expect("could not get name").as_str();
// Really we should ignore text inside code blocks,
// this is a hack to not try to treat `#[derive()]`,
// `[profile]`, `[test]`, or `[E\d\d\d\d]` like a link.
if name.starts_with("derive(") ||
name.starts_with("profile") ||
name.starts_with("test") ||
name.starts_with("no_mangle") ||
error_code.is_match(name) {
return name.to_string()
}
let val = match caps.name("val") {
// `[name](link)`
Some(value) => value.as_str().to_string(),
None => {
match caps.name("key") {
Some(key) => {
match key.as_str() {
// `[name][]`
"" => format!("{}", ref_map.get(&name.to_uppercase()).expect(&format!("could not find url for the link text `{}`", name))),
// `[name][reference]`
_ => format!("{}", ref_map.get(&key.as_str().to_uppercase()).expect(&format!("could not find url for the link text `{}`", key.as_str()))),
}
}
// `[name]` as reference
None => format!("{}", ref_map.get(&name.to_uppercase()).expect(&format!("could not find url for the link text `{}`", name))),
}
}
};
format!("{} at *{}*", name, val)
}
}
});
output.to_string()
}
#[cfg(test)]
mod tests {
fn parse(source: String) -> String {
super::parse_links(super::parse_references(source))
}
#[test]
fn parses_inline_link() {
let source =
r"This is a [link](http://google.com) that should be expanded"
.to_string();
let target =
r"This is a link at *http://google.com* that should be expanded"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_multiline_links() {
let source = r"This is a [link](http://google.com) that
should appear expanded. Another [location](/here/) and [another](http://gogogo)"
.to_string();
let target = r"This is a link at *http://google.com* that
should appear expanded. Another location at */here/* and another at *http://gogogo*"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_reference() {
let source = r"This is a [link][theref].
[theref]: http://example.com/foo
more text"
.to_string();
let target = r"This is a link at *http://example.com/foo*.
more text"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_implicit_link() {
let source = r"This is an [implicit][] link.
[implicit]: /The Link/"
.to_string();
let target = r"This is an implicit at */The Link/* link.".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_refs_with_one_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_refs_with_two_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_refs_with_three_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
#[should_panic]
fn rejects_refs_with_four_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_optional_inline_title() {
let source =
r###"This is a titled [link](http://example.com "My title")."###
.to_string();
let target =
r"This is a titled link at *http://example.com*.".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_title_with_puctuation() {
let source =
r###"[link](http://example.com "It's Title")"###.to_string();
let target = r"link at *http://example.com*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_name_with_punctuation() {
let source = r###"[I'm here](there)"###.to_string();
let target = r###"I'm here at *there*"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_name_with_utf8() {
let source = r###"[users forum](the users forum)"###.to_string();
let target =
r###"users forum at *the users forum*"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_reference_with_punctuation() {
let source = r###"[link][the ref-ref]
[the ref-ref]:http://example.com/ref-ref"###
.to_string();
let target = r###"link at *http://example.com/ref-ref*"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_reference_case_insensitively() {
let source = r"[link][Ref]
[ref]: The reference"
.to_string();
let target = r"link at *The reference*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_link_as_reference_when_reference_is_empty() {
let source = r"[link as reference][]
[link as reference]: the actual reference"
.to_string();
let target = r"link as reference at *the actual reference*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_link_without_reference_as_reference() {
let source = r"[link] is alone
[link]: The contents"
.to_string();
let target = r"link at *The contents* is alone".to_string();
assert_eq!(parse(source), target);
}
#[test]
#[ignore]
fn parses_link_without_reference_as_reference_with_asterisks() {
let source = r"*[link]* is alone
[link]: The contents"
.to_string();
let target = r"*link* at *The contents* is alone".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_pre_sections() {
let source = r###"```toml
[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
[dependencies]
```
"###
.to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_quoted_sections() {
let source = r###"do not change `[package]`."###.to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_quoted_sections_containing_newlines() {
let source = r"do not change `this [package]
is still here` [link](ref)"
.to_string();
let target = r"do not change `this [package]
is still here` link at *ref*"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_pre_sections_while_still_handling_links() {
let source = r###"```toml
[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
[dependencies]
```
Another [link]
more text
[link]: http://gohere
"###
.to_string();
let target = r###"```toml
[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
[dependencies]
```
Another link at *http://gohere*
more text
"###
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_quotes_in_pre_sections() {
let source = r###"```bash
$ cargo build
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
src/main.rs:23:21: 23:35 error: mismatched types [E0308]
src/main.rs:23 match guess.cmp(&secret_number) {
^~~~~~~~~~~~~~
src/main.rs:23:21: 23:35 help: run `rustc --explain E0308` to see a detailed explanation
src/main.rs:23:21: 23:35 note: expected type `&std::string::String`
src/main.rs:23:21: 23:35 note: found type `&_`
error: aborting due to previous error
Could not compile `guessing_game`.
```
"###
.to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_short_quotes() {
let source = r"to `1` at index `[0]` i".to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_pre_sections_with_final_quote() {
let source = r###"```bash
$ cargo run
Compiling points v0.1.0 (file:///projects/points)
error: the trait bound `Point: std::fmt::Display` is not satisfied [--explain E0277]
--> src/main.rs:8:29
8 |> println!("Point 1: {}", p1);
|> ^^
<std macros>:2:27: 2:58: note: in this expansion of format_args!
<std macros>:3:1: 3:54: note: in this expansion of print! (defined in <std macros>)
src/main.rs:8:5: 8:33: note: in this expansion of println! (defined in <std macros>)
note: `Point` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
note: required by `std::fmt::Display::fmt`
```
`here` is another [link](the ref)
"###.to_string();
let target = r###"```bash
$ cargo run
Compiling points v0.1.0 (file:///projects/points)
error: the trait bound `Point: std::fmt::Display` is not satisfied [--explain E0277]
--> src/main.rs:8:29
8 |> println!("Point 1: {}", p1);
|> ^^
<std macros>:2:27: 2:58: note: in this expansion of format_args!
<std macros>:3:1: 3:54: note: in this expansion of print! (defined in <std macros>)
src/main.rs:8:5: 8:33: note: in this expansion of println! (defined in <std macros>)
note: `Point` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
note: required by `std::fmt::Display::fmt`
```
`here` is another link at *the ref*
"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_adam_p_cheatsheet() {
let source = r###"[I'm an inline-style link](https://www.google.com)
[I'm an inline-style link with title](https://www.google.com "Google's Homepage")
[I'm a reference-style link][Arbitrary case-insensitive reference text]
[I'm a relative reference to a repository file](../blob/master/LICENSE)
[You can use numbers for reference-style link definitions][1]
Or leave it empty and use the [link text itself][].
URLs and URLs in angle brackets will automatically get turned into links.
http://www.example.com or <http://www.example.com> and sometimes
example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
[arbitrary case-insensitive reference text]: https://www.mozilla.org
[1]: http://slashdot.org
[link text itself]: http://www.reddit.com"###
.to_string();
let target = r###"I'm an inline-style link at *https://www.google.com*
I'm an inline-style link with title at *https://www.google.com*
I'm a reference-style link at *https://www.mozilla.org*
I'm a relative reference to a repository file at *../blob/master/LICENSE*
You can use numbers for reference-style link definitions at *http://slashdot.org*
Or leave it empty and use the link text itself at *http://www.reddit.com*.
URLs and URLs in angle brackets will automatically get turned into links.
http://www.example.com or <http://www.example.com> and sometimes
example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
"###
.to_string();
assert_eq!(parse(source), target);
}
}

@@ -0,0 +1,159 @@
#[macro_use]
extern crate lazy_static;
use regex::Regex;
use std::error::Error;
use std::fs;
use std::fs::File;
use std::io::prelude::*;
use std::io::{BufReader, BufWriter};
use std::path::{Path, PathBuf};
fn main() -> Result<(), Box<dyn Error>> {
// Get all listings from the `listings` directory
let listings_dir = Path::new("listings");
// Put the results in the `tmp/listings` directory
let out_dir = Path::new("tmp/listings");
// Clear out any existing content in `tmp/listings`
if out_dir.is_dir() {
fs::remove_dir_all(out_dir)?;
}
// Create a new, empty `tmp/listings` directory
fs::create_dir(out_dir)?;
// For each chapter in the `listings` directory,
for chapter in fs::read_dir(listings_dir)? {
let chapter = chapter?;
let chapter_path = chapter.path();
let chapter_name = chapter_path
.file_name()
.expect("Chapter should've had a name");
// Create a corresponding chapter dir in `tmp/listings`
let output_chapter_path = out_dir.join(chapter_name);
fs::create_dir(&output_chapter_path)?;
// For each listing in the chapter directory,
for listing in fs::read_dir(chapter_path)? {
let listing = listing?;
let listing_path = listing.path();
let listing_name = listing_path
.file_name()
.expect("Listing should've had a name");
// Create a corresponding listing dir in the tmp chapter dir
let output_listing_dir = output_chapter_path.join(listing_name);
fs::create_dir(&output_listing_dir)?;
// Copy all the cleaned files in the listing to the tmp directory
copy_cleaned_listing_files(listing_path, output_listing_dir)?;
}
}
// Create a compressed archive of all the listings
let tarfile = File::create("tmp/listings.tar.gz")?;
let encoder =
flate2::write::GzEncoder::new(tarfile, flate2::Compression::default());
let mut archive = tar::Builder::new(encoder);
archive.append_dir_all("listings", "tmp/listings")?;
// Assure whoever is running this that the script exiting successfully, and remind them
// where the generated file ends up
println!("Release tarball of listings in tmp/listings.tar.gz");
Ok(())
}
// Cleaned listings will not contain:
//
// - `target` directories
// - `output.txt` files used to display output in the book
// - `rustfmt-ignore` files used to signal to update-rustc.sh the listing shouldn't be formatted
// - anchor comments or snip comments
// - empty `main` functions in `lib.rs` files used to trick rustdoc
fn copy_cleaned_listing_files(
from: PathBuf,
to: PathBuf,
) -> Result<(), Box<dyn Error>> {
for item in fs::read_dir(from)? {
let item = item?;
let item_path = item.path();
let item_name =
item_path.file_name().expect("Item should've had a name");
let output_item = to.join(item_name);
if item_path.is_dir() {
// Don't copy `target` directories
if item_name != "target" {
fs::create_dir(&output_item)?;
copy_cleaned_listing_files(item_path, output_item)?;
}
} else {
// Don't copy output files or files that tell update-rustc.sh not to format
if item_name != "output.txt" && item_name != "rustfmt-ignore" {
let item_extension = item_path.extension();
if item_extension.is_some() && item_extension.unwrap() == "rs" {
copy_cleaned_rust_file(
item_name,
&item_path,
&output_item,
)?;
} else {
// Copy any non-Rust files without modification
fs::copy(item_path, output_item)?;
}
}
}
}
Ok(())
}
lazy_static! {
static ref ANCHOR_OR_SNIP_COMMENTS: Regex = Regex::new(
r"(?x)
//\s*ANCHOR:\s*[\w_-]+ # Remove all anchor comments
|
//\s*ANCHOR_END:\s*[\w_-]+ # Remove all anchor ending comments
|
//\s*--snip-- # Remove all snip comments
"
)
.unwrap();
}
lazy_static! {
static ref EMPTY_MAIN: Regex = Regex::new(r"fn main\(\) \{}").unwrap();
}
// Cleaned Rust files will not contain:
//
// - anchor comments or snip comments
// - empty `main` functions in `lib.rs` files used to trick rustdoc
fn copy_cleaned_rust_file(
item_name: &std::ffi::OsStr,
from: &PathBuf,
to: &PathBuf,
) -> Result<(), Box<dyn Error>> {
let from_buf = BufReader::new(File::open(from)?);
let mut to_buf = BufWriter::new(File::create(to)?);
for line in from_buf.lines() {
let line = line?;
if !ANCHOR_OR_SNIP_COMMENTS.is_match(&line) {
if item_name != "lib.rs" || !EMPTY_MAIN.is_match(&line) {
writeln!(&mut to_buf, "{}", line)?;
}
}
}
to_buf.flush()?;
Ok(())
}

@@ -0,0 +1,83 @@
use std::io;
use std::io::prelude::*;
fn main() {
write_md(remove_hidden_lines(&read_md()));
}
fn read_md() -> String {
let mut buffer = String::new();
match io::stdin().read_to_string(&mut buffer) {
Ok(_) => buffer,
Err(error) => panic!("{}", error),
}
}
fn write_md(output: String) {
write!(io::stdout(), "{}", output).unwrap();
}
fn remove_hidden_lines(input: &str) -> String {
let mut resulting_lines = vec![];
let mut within_codeblock = false;
for line in input.lines() {
if line.starts_with("```") {
within_codeblock = !within_codeblock;
}
if !within_codeblock || (!line.starts_with("# ") && line != "#") {
resulting_lines.push(line)
}
}
resulting_lines.join("\n")
}
#[cfg(test)]
mod tests {
use crate::remove_hidden_lines;
#[test]
fn hidden_line_in_code_block_is_removed() {
let input = r#"
In this listing:
```
fn main() {
# secret
}
```
you can see that...
"#;
let output = remove_hidden_lines(input);
let desired_output = r#"
In this listing:
```
fn main() {
}
```
you can see that...
"#;
assert_eq!(output, desired_output);
}
#[test]
fn headings_arent_removed() {
let input = r#"
# Heading 1
"#;
let output = remove_hidden_lines(input);
let desired_output = r#"
# Heading 1
"#;
assert_eq!(output, desired_output);
}
}

@@ -0,0 +1,45 @@
extern crate regex;
use regex::{Captures, Regex};
use std::collections::HashSet;
use std::io;
use std::io::{Read, Write};
fn main() {
let mut buffer = String::new();
if let Err(e) = io::stdin().read_to_string(&mut buffer) {
panic!("{}", e);
}
let mut refs = HashSet::new();
// Capture all links and link references.
let regex =
r"\[([^\]]+)\](?:(?:\[([^\]]+)\])|(?:\([^\)]+\)))(?i)<!--\signore\s-->";
let link_regex = Regex::new(regex).unwrap();
let first_pass = link_regex.replace_all(&buffer, |caps: &Captures<'_>| {
// Save the link reference we want to delete.
if let Some(reference) = caps.get(2) {
refs.insert(reference.as_str().to_string());
}
// Put the link title back.
caps.get(1).unwrap().as_str().to_string()
});
// Search for the references we need to delete.
let ref_regex = Regex::new(r"(?m)^\[([^\]]+)\]:\s.*\n").unwrap();
let out = ref_regex.replace_all(&first_pass, |caps: &Captures<'_>| {
let capture = caps.get(1).unwrap().to_owned();
// Check if we've marked this reference for deletion ...
if refs.contains(capture.as_str()) {
return "".to_string();
}
// ... else we put back everything we captured.
caps.get(0).unwrap().as_str().to_string()
});
write!(io::stdout(), "{}", out).unwrap();
}

@@ -0,0 +1,51 @@
extern crate regex;
use regex::{Captures, Regex};
use std::io;
use std::io::{Read, Write};
fn main() {
write_md(remove_markup(read_md()));
}
fn read_md() -> String {
let mut buffer = String::new();
match io::stdin().read_to_string(&mut buffer) {
Ok(_) => buffer,
Err(error) => panic!("{}", error),
}
}
fn write_md(output: String) {
write!(io::stdout(), "{}", output).unwrap();
}
fn remove_markup(input: String) -> String {
let filename_regex =
Regex::new(r#"\A<span class="filename">(.*)</span>\z"#).unwrap();
// Captions sometimes take up multiple lines.
let caption_start_regex =
Regex::new(r#"\A<span class="caption">(.*)\z"#).unwrap();
let caption_end_regex = Regex::new(r#"(.*)</span>\z"#).unwrap();
let regexen = vec![filename_regex, caption_start_regex, caption_end_regex];
let lines: Vec<_> = input
.lines()
.flat_map(|line| {
// Remove our syntax highlighting and rustdoc markers.
if line.starts_with("```") {
Some(String::from("```"))
// Remove the span around filenames and captions.
} else {
let result =
regexen.iter().fold(line.to_string(), |result, regex| {
regex.replace_all(&result, |caps: &Captures<'_>| {
caps.get(1).unwrap().as_str().to_string()
}).to_string()
});
Some(result)
}
})
.collect();
lines.join("\n")
}

89
tools/update-rustc.sh Executable file
@@ -0,0 +1,89 @@
#!/bin/bash
set -eu
# Build the book before making any changes for comparison of the output.
echo 'Building book into `tmp/book-before` before updating...'
mdbook build -d tmp/book-before
# Rustfmt all listings
echo 'Formatting all listings...'
find -s listings -name Cargo.toml -print0 | while IFS= read -r -d '' f; do
dir_to_fmt=$(dirname $f)
# There are a handful of listings we don't want to rustfmt and skipping doesn't work;
# those will have a file in their directory that explains why.
if [ ! -f "${dir_to_fmt}/rustfmt-ignore" ]; then
cd $dir_to_fmt
cargo fmt --all && true
cd - > /dev/null
fi
done
# Get listings without anchor comments in tmp by compiling a release listings artifact
echo 'Generate listings without anchor comments...'
cargo run --bin release_listings
root_dir=$(pwd)
echo 'Regenerating output...'
# For any listings where we show the output,
find -s listings -name output.txt -print0 | while IFS= read -r -d '' f; do
build_directory=$(dirname $f)
full_build_directory="${root_dir}/${build_directory}"
full_output_path="${full_build_directory}/output.txt"
tmp_build_directory="tmp/${build_directory}"
cd $tmp_build_directory
# Save the previous compile time; we're going to keep it to minimize diff
# churn
compile_time=$(sed -E -ne "s/.*Finished (dev|test) \[unoptimized \+ debuginfo] target\(s\) in ([0-9.]*).*/\2/p" ${full_output_path})
# Save the hash from the first test binary; we're going to keep it to
# minimize diff churn
test_binary_hash=$(sed -E -ne 's@.*Running [^[:space:]]+ \(target/debug/deps/[^-]*-([^\s]*)\)@\1@p' ${full_output_path} | head -n 1)
# Act like this is the first time this listing has been built
cargo clean
# Run the command in the existing output file
cargo_command=$(sed -ne "s/$ \(.*\)/\1/p" ${full_output_path})
# Clear the output file of everything except the command
echo "$ ${cargo_command}" > ${full_output_path}
# Regenerate the output and append to the output file. Turn some warnings
# off to reduce output noise, and use one test thread to get consistent
# ordering of tests in the output when the command is `cargo test`.
RUSTFLAGS="-A unused_variables -A dead_code" RUST_TEST_THREADS=1 $cargo_command >> ${full_output_path} 2>&1 || true
# Set the project file path to the projects directory plus the crate name instead of a path
# to the computer of whoever is running this
sed -i '' -E -e "s/(Compiling|Checking) ([^\)]*) v0.1.0 (.*)/\1 \2 v0.1.0 (file:\/\/\/projects\/\2)/" ${full_output_path}
# Restore the previous compile time, if there is one
if [ -n "${compile_time}" ]; then
sed -i '' -E -e "s/Finished (dev|test) \[unoptimized \+ debuginfo] target\(s\) in [0-9.]*/Finished \1 [unoptimized + debuginfo] target(s) in ${compile_time}/" ${full_output_path}
fi
# Restore the previous test binary hash, if there is one
if [ -n "${test_binary_hash}" ]; then
replacement='s@Running ([^[:space:]]+) \(target/debug/deps/([^-]*)-([^\s]*)\)@Running \1 (target/debug/deps/\2-'
replacement+="${test_binary_hash}"
replacement+=')@g'
sed -i '' -E -e "${replacement}" ${full_output_path}
fi
cd - > /dev/null
done
# Build the book after making all the changes
echo 'Building book into `tmp/book-after` after updating...'
mdbook build -d tmp/book-after
# Run the megadiff script that removes all files that are the same, leaving only files to audit
echo 'Removing tmp files that had no changes from the update...'
./tools/megadiff.sh
echo 'Done.'