Initial mdbook project files
This commit is contained in:

Cargo.toml (new file, 64 lines)
[package]
name = "element-call-book"
version = "0.0.1"
authors = [
    "Ralf Zerres <ralf.zerres@mail.de>"
]
description = "The Element-Call book"
edition = "2021"

[dependencies]
docopt = "1.1.0"
flate2 = "1.0.13"
lazy_static = "1.4.0"
regex = "1.3.3"
serde = "1.0"
tar = "0.4.26"
walkdir = "2.3.1"

[build-dependencies]
cargo-readme = "3.2.0"
mdbook = "~0.4.12"
mdbook-mermaid = "0.8.3"

#mdbook-latex = "^0.1"
#md2tex = { git = "https://github.com/lbeckman314/md2tex.git", branch = "master" }
mdbook-latex = "0.1.3"
#mdbook-latex = { git = "https://github.com/lbeckman314/mdbook-latex.git", branch = "master" }

[output.html]

[output.linkcheck]
optional = true

[[bin]]
name = "concat_chapters"
path = "tools/src/bin/concat_chapters.rs"

[[bin]]
name = "convert_quotes"
path = "tools/src/bin/convert_quotes.rs"

[[bin]]
name = "lfp"
path = "tools/src/bin/lfp.rs"

[[bin]]
name = "link2print"
path = "tools/src/bin/link2print.rs"

[[bin]]
name = "release_listings"
path = "tools/src/bin/release_listings.rs"

[[bin]]
name = "remove_hidden_lines"
path = "tools/src/bin/remove_hidden_lines.rs"

[[bin]]
name = "remove_links"
path = "tools/src/bin/remove_links.rs"

[[bin]]
name = "remove_markup"
path = "tools/src/bin/remove_markup.rs"
README.md (changed, 205 lines)
# element-call-book

![Welcome to the element-call book.][element_call_book]

This repository contains the text source for `The Element-Call` book.
We will refer to it as the `Element-Call Book`.

[element_call_book]: https://gitea.networkx.de/rzerres/element-call-book/src/branch/main/src/img/element-call-1.png

<!--
WIP: once it is ready to be shipped
[The book is available in dead-tree form from No Starch Press][nostarch].

[nostarch]: https://nostarch.com/

You can read the book for free online. Please see the book as shipped with
the latest [stable] or [develop] Element-Call releases. Be aware that issues
in those versions may have been fixed in this repository already, as those
releases are updated less frequently.

[stable]: https://element.io/element-call/stable/book/
[develop]: https://ekenebt.io/element-call/book/

See the [releases] page to download the code of all the code listings that appear in the book.

[releases]: https://element.io/element-call/book/releases
-->

#### Requirements

##### mdBook

Building the book requires [mdBook] and its helper tools. Ideally, the
version used should match the one rust-lang/rust pins in
[this file][rust-mdbook]. Install these tools with:

```console
$ cargo install mdbook mdbook-linkchecker mdbook-mermaid
```

This command fetches the latest mdbook version from [crates.io],
together with the add-on tools mdbook-linkchecker and mdbook-mermaid.
With the linkchecker we are able to ensure that the links used
inside the markdown sources resolve to valid targets.
mdbook-mermaid is a preprocessor for mdbook that adds
mermaid.js support. We use it to create graphs that visualize
some process flows.

[crates.io]: https://crates.io/crates/cargo-readme

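The mermaid graphs mentioned above are written as fenced blocks directly in the chapter sources; mdbook-mermaid turns them into rendered diagrams. A minimal illustrative flowchart (the node names here are invented for this sketch, not taken from the book):

```mermaid
graph LR
    A[Client] --> B[Matrix homeserver]
    B --> C[Conference focus]
```
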
##### Multilingual version of mdBook

The `Element-Call` book aims to make translations as flawless as
possible. We are using v0.4.12, which will do the job. A patch is
available that adds what is needed to organize a book as a
multilingual structure: all sources stored in a single hierarchical
code tree. This work isn't finished yet, but it is good enough to make
use of this branch for our productive needs. Thank you [Nutomic
and Ruin0x11][mdBook localization].

You can force the installation of a given version number
with:

```console
$ cargo install mdbook --vers 0.4.12 mdbook-linkchecker mdbook-mermaid
```

##### Cargo handled README

We make use of the crate [cargo-readme]. It resolves Rust `doc
comments` to generate the README.md file you are reading now. Install the crate
with the following command if you want to update or regenerate this README yourself.

[cargo-readme]: https://github.com/livioribeiro/cargo-readme

```console
$ cargo install cargo-readme
```

Once the cargo-readme binary is available, you can render the
README.md. Change into the document root and type:

```console
$ cargo readme > README.md
```

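Conceptually, cargo-readme collects the crate-level `//!` comment lines from `src/lib.rs`. The core extraction step can be sketched with `sed` (a simplification; the real tool also handles templates and badges, and the file name here is a scratch copy for illustration):

```shell
# Rough sketch of what cargo-readme extracts: the crate-level `//!` lines,
# with the comment prefix stripped. Simplified; not the actual implementation.
printf '//! # Title\n//! Body text\nfn main() {}\n' > /tmp/lib_demo.rs
sed -n 's@^//! @@p' /tmp/lib_demo.rs
```
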
[mdBook]: https://github.com/rust-lang-nursery/mdBook
[mdBook localization]: https://github.com/Nutomic/mdBook/tree/localization
[rust-mdbook]: https://github.com/rust-lang/rust/blob/master/src/tools/rustbook/Cargo.toml

#### Building

##### Building the book

To build the book with the default language (here: 'en'), change
into the root directory of the element-call submodule and type:

```console
$ mdbook build --dest-dir book/en
```

The rendered HTML output will be placed underneath the
`book/en` subdirectory. To check it out, open it in your web
browser.

_Firefox:_
```console
$ firefox book/en/html/index.html                       # Linux
$ open -a "Firefox" book/en/html/index.html             # OS X
$ Start-Process "firefox.exe" .\book\en\html\index.html # Windows (PowerShell)
$ start firefox.exe .\book\en\html\index.html           # Windows (Cmd)
```

_Chrome:_
```console
$ google-chrome book/en/html/index.html                 # Linux
$ open -a "Google Chrome" book/en/html/index.html       # OS X
$ Start-Process "chrome.exe" .\book\en\html\index.html  # Windows (PowerShell)
$ start chrome.exe .\book\en\html\index.html            # Windows (Cmd)
```

Executing `mdbook serve` will have **mdbook** act as a web server,
which can be accessed by opening the following URL: <http://localhost:3000>.

To run the tests:

```console
$ mdbook test
```

##### Building a language variant of the book

Translated versions of the book are placed inside the code tree
in the subdirectory `src/<language id>`.

E.g. if you would like to render the German version (language id: 'de'), change
into the Element-Call book's root directory and type:

```console
$ MDBOOK_BOOK__src=src/de mdbook build --dest-dir book/de --open
```

The rendered HTML output will be placed underneath the
`book/de` subdirectory. Since we appended the `--open` parameter, your default
browser should be fired up and ... tada!

#### 🛠️ Development

We welcome contributions to the Element-Call book from the community!
The best place to get started is our [`guide for contributors`][contributors-guide].

This is part of our larger [documentation][element-documentation],
which includes information for Element-Call developers and
translators. We'd love your help! Please see
[CONTRIBUTING.md][contrib] to learn about the kinds of contributions
we're looking for.

Alongside all that, join our [developer community][element-call-room-matrix]
on Matrix, featuring real humans!

<!--
WIP: once it is ready to be shipped
#### Code of Conduct

We are committed to providing a friendly, safe and welcoming
environment. Read more about our policy on the [code-of-conduct][coc] page.

[coc]: https://element-hq.github.io/element-call/book/policies/code-of-conduct.md
-->

[contrib]: https://element-hq.github.io/element-call/book/blob/main/CONTRIBUTING.md
[contributors-guide]: https://element-hq.github.io/element-call/book/latest/development/contributing_guide.html
[element-documentation]: https://element-hq.github.io/element-call/book/latest
[element-call-room-matrix]: https://matrix.to/#/#element-call-dev:matrix.org

##### Translations

We'd love help translating the book! See the [Translations] label
to join efforts that are currently in progress. Open a new
issue to start working on a new language! We're waiting on [mdbook
support] for multiple languages to be finalized, but feel free to
start! A [pull request] looks promising. The mainline version (we
depend on v0.4.12) is capable of rendering the existing versions
where sources are installed in the intended final structure.

[Translations]: https://gitea.networkx.de/rzerres/element-call/book/issues?q=is%3Aopen+is%3Aissue+label%3ATranslations
[mdbook support]: https://github.com/rust-lang-nursery/mdBook/issues/5
[pull request]: https://github.com/rust-lang/mdBook/pull/1306

#### Spellchecking

To scan source files for spelling errors, you can use the `spellcheck.sh`
script. It needs a dictionary of valid words, which is provided in
`dictionary.txt`. If the script produces a false positive (say, you used the word
`BTreeMap`, which the script considers invalid), you need to add this word to
`dictionary.txt` (keep the sorted order for consistency).
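A quick way to check the sorted-order requirement after adding a word (a sketch; the real file lives at the repository root, a scratch copy is used here):

```shell
# sort -c exits non-zero if the file is out of order; -o re-sorts in place.
printf 'BTreeMap\nmdBook\nmermaid\n' > /tmp/dictionary.txt
LC_ALL=C sort -c /tmp/dictionary.txt && echo "sorted"
LC_ALL=C sort -o /tmp/dictionary.txt /tmp/dictionary.txt
```
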

#### License

<!-- License source -->
[Logo-CC_BY]: https://i.creativecommons.org/l/by/4.0/88x31.png "Creative Common Logo"
[License-CC_BY]: https://creativecommons.org/licenses/by/4.0/legalcode "Creative Common License"

This work is licensed under a [Creative Commons License 4.0][License-CC_BY].

![Creative Common Logo][Logo-CC_BY]

© 2024 Ralf Zerres
book.toml (new file, 71 lines)
[book]
title = "The Element-Call Book"
description = "Element Call - the world’s first decentralised voice and video conferencing solution powered entirely by Matrix!"
authors = [
    "Ralf Zerres <ralf.zerres@mail.de>",
    "with Contributions from the Rust Community"
]
language = "en"

[rust]
edition = "2021"

[build]
create-missing = false

[output.html]
additional-css = ["ferries.css", "theme/2020-edition.css"]
additional-js = ["ferries.js"]
#additional-js = ["ferries.js", "mermaid.min.js", "mermaid-init.js"]
cname = "element-call-book.rs"
copy-fonts = true
smart-punctuation = true
default-theme = "light"
edit-url-template = "https://gitea.networkx.de/rzerres/element-call/tree/main/book/{path}"
git-repository-url = "https://gitea.networkx.de/rzerres/element-call/tree/main/book/{path}"
git-repository-icon = "fa-github"
input-404 = "404.md"
preferred-dark-theme = "navy"
mathjax-support = true

[output.html.fold]
enable = true
level = 0

[output.html.playground]
editable = true
line-numbers = true

[output.html.redirect]
"/format/config.html" = "configuration/index.html"

[output.html.search]
limit-results = 20
use-boolean-and = true
boost-title = 2
boost-hierarchy = 2
boost-paragraph = 1
expand = true
heading-split-level = 2

[output.linkcheck]
## Should we check links on the internet? Enabling this option adds a
## non-negligible performance impact.
#follow-web-links = false

## Are we allowed to link to files outside of the book's root directory? This
## may help prevent linking to sensitive files (e.g. "../../../../etc/shadow")
#traverse-parent-directories = true

[preprocessor]

[preprocessor.mermaid]
command = "mdbook-mermaid"

[language.en]
name = "English"

[language.de]
name = "Deutsch"
title = "Das Element-Call Buch"
description = "Element-Call - die weltweit erste dezentralisierte Sprach- und Video-Konferenz Lösung, die vollständig auf Matrix aufbaut."
ferries.css (new file, 33 lines)
body.light .does_not_compile,
body.light .panics,
body.light .not_desired_behavior,
body.rust .does_not_compile,
body.rust .panics,
body.rust .not_desired_behavior {
    background: #fff1f1;
}

body.coal .does_not_compile,
body.coal .panics,
body.coal .not_desired_behavior,
body.navy .does_not_compile,
body.navy .panics,
body.navy .not_desired_behavior,
body.ayu .does_not_compile,
body.ayu .panics,
body.ayu .not_desired_behavior {
    background: #501f21;
}

.ferris {
    position: absolute;
    z-index: 99;
    right: 5px;
    top: 30px;
    width: 10%;
    height: auto;
}

.ferris-explain {
    width: 100px;
}
ferries.js (new file, 51 lines)
var ferrisTypes = [
  {
    attr: 'does_not_compile',
    title: 'This code does not compile!'
  },
  {
    attr: 'panics',
    title: 'This code panics!'
  },
  {
    attr: 'unsafe',
    title: 'This code block contains unsafe code.'
  },
  {
    attr: 'not_desired_behavior',
    title: 'This code does not produce the desired behavior.'
  }
]

document.addEventListener('DOMContentLoaded', () => {
  for (var ferrisType of ferrisTypes) {
    attachFerrises(ferrisType)
  }
})

function attachFerrises (type) {
  var elements = document.getElementsByClassName(type.attr)

  for (var codeBlock of elements) {
    var lines = codeBlock.textContent.split(/\r|\r\n|\n/).length - 1;

    if (lines >= 4) {
      attachFerris(codeBlock, type)
    }
  }
}

function attachFerris (element, type) {
  var a = document.createElement('a')
  a.setAttribute('href', 'ch00-00-introduction.html#ferris')
  a.setAttribute('target', '_blank')

  var img = document.createElement('img')
  img.setAttribute('src', 'img/ferris/' + type.attr + '.svg')
  img.setAttribute('title', type.title)
  img.className = 'ferris'

  a.appendChild(img)

  element.parentElement.insertBefore(a, element)
}
rustfmt.toml (new file, 1 line)
max_width = 80
src/404.md (new file, 3 lines)
# Document not found (404)

This URL is invalid, sorry. Try the search instead!
src/img/element-call-structure.svg (new file, 10392 lines)
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 1.0 MiB

src/img/element-call.png (new binary file)
Binary file not shown.
After Width: | Height: | Size: 855 KiB
src/lib.rs (new file, 206 lines)
#![crate_name = "element_call_book"]
#![crate_type = "lib"]

//! ![Welcome to the element-call book.][element_call_book]
//!
//! This repository contains the text source for "The Element-Call" book.
//! We will refer to it as the `Element-Call Book`.
//!
//! [element_call_book]: https://gitea.networkx.de/rzerres/element-call-book/src/branch/main/src/img/element-call-webui.png
//! <!--
//! WIP: once it is ready to be shipped
//! [The book is available in dead-tree form from No Starch Press][nostarch].
//!
//! [nostarch]: https://nostarch.com/
//!
//! You can read the book for free online. Please see the book as shipped with
//! the latest [stable] or [develop] Element-Call releases. Be aware that issues
//! in those versions may have been fixed in this repository already, as those
//! releases are updated less frequently.
//!
//! [stable]: https://element.io/element-call/stable/book/
//! [develop]: https://ekenebt.io/element-call/book/
//!
//! See the [releases] page to download the code of all the code listings that appear in the book.
//!
//! [releases]: https://element.io/element-call/book/releases
//! -->
//!
//! ### Requirements
//!
//! #### mdBook
//! Building the book requires [mdBook] and its helper tools. Ideally, the
//! version used should match the one rust-lang/rust pins in
//! [this file][rust-mdbook]. Install these tools with:
//!
//! ```console
//! $ cargo install mdbook mdbook-linkchecker mdbook-mermaid
//! ```
//!
//! This command fetches the latest mdbook version from [crates.io],
//! together with the add-on tools mdbook-linkchecker and mdbook-mermaid.
//! With the linkchecker we are able to ensure that the links used
//! inside the markdown sources resolve to valid targets.
//! mdbook-mermaid is a preprocessor for mdbook that adds
//! mermaid.js support. We use it to create graphs that visualize
//! some process flows.
//!
//! [crates.io]: https://crates.io/crates/cargo-readme
//!
//! #### Multilingual version of mdBook
//! The `Element-Call` book aims to make translations as flawless as
//! possible. We are using v0.4.12, which will do the job. A patch is
//! available that adds what is needed to organize a book as a
//! multilingual structure: all sources stored in a single hierarchical
//! code tree. This work isn't finished yet, but it is good enough to make
//! use of this branch for our productive needs. Thank you [Nutomic
//! and Ruin0x11][mdBook localization].
//!
//! You can force the installation of a given version number
//! with:
//!
//! ```console
//! $ cargo install mdbook --vers 0.4.12 mdbook-linkchecker mdbook-mermaid
//! ```
//!
//! #### Cargo handled README
//!
//! We make use of the crate [cargo-readme]. It resolves Rust `doc
//! comments` to generate the README.md file you are reading now. Install the crate
//! with the following command if you want to update or regenerate this README yourself.
//!
//! [cargo-readme]: https://github.com/livioribeiro/cargo-readme
//!
//! ```console
//! $ cargo install cargo-readme
//! ```
//!
//! Once the cargo-readme binary is available, you can render the
//! README.md. Change into the document root and type:
//!
//! ```console
//! $ cargo readme > README.md
//! ```
//!
//! [mdBook]: https://github.com/rust-lang-nursery/mdBook
//! [mdBook localization]: https://github.com/Nutomic/mdBook/tree/localization
//! [rust-mdbook]: https://github.com/rust-lang/rust/blob/master/src/tools/rustbook/Cargo.toml
//!
//! ### Building
//!
//! #### Building the book
//!
//! To build the book with the default language (here: 'en'), change
//! into the root directory of the element-call submodule and type:
//!
//! ```console
//! $ mdbook build --dest-dir book/en
//! ```
//!
//! The rendered HTML output will be placed underneath the
//! `book/en` subdirectory. To check it out, open it in your web
//! browser.
//!
//! _Firefox:_
//! ```console
//! $ firefox book/en/html/index.html                       # Linux
//! $ open -a "Firefox" book/en/html/index.html             # OS X
//! $ Start-Process "firefox.exe" .\book\en\html\index.html # Windows (PowerShell)
//! $ start firefox.exe .\book\en\html\index.html           # Windows (Cmd)
//! ```
//!
//! _Chrome:_
//! ```console
//! $ google-chrome book/en/html/index.html                 # Linux
//! $ open -a "Google Chrome" book/en/html/index.html       # OS X
//! $ Start-Process "chrome.exe" .\book\en\html\index.html  # Windows (PowerShell)
//! $ start chrome.exe .\book\en\html\index.html            # Windows (Cmd)
//! ```
//!
//! Executing `mdbook serve` will have **mdbook** act as a web server,
//! which can be accessed by opening the following URL: <http://localhost:3000>.
//!
//! To run the tests:
//!
//! ```console
//! $ mdbook test
//! ```
//!
//! #### Building a language variant of the book
//!
//! Translated versions of the book are placed inside the code tree
//! in the subdirectory `src/<language id>`.
//!
//! E.g. if you would like to render the German version (language id: 'de'), change
//! into the Element-Call book's root directory and type:
//!
//! ```console
//! $ MDBOOK_BOOK__src=src/de mdbook build --dest-dir book/de --open
//! ```
//!
//! The rendered HTML output will be placed underneath the
//! `book/de` subdirectory. Since we appended the `--open` parameter, your default
//! browser should be fired up and ... tada!
//!
//! ### 🛠️ Development
//!
//! We welcome contributions to the Element-Call book from the community!
//! The best place to get started is our [`guide for contributors`][contributors-guide].
//!
//! This is part of our larger [documentation][element-documentation],
//! which includes information for Element-Call developers and
//! translators. We'd love your help! Please see
//! [CONTRIBUTING.md][contrib] to learn about the kinds of contributions
//! we're looking for.
//!
//! Alongside all that, join our [developer community][element-call-room-matrix]
//! on Matrix, featuring real humans!
//!
//! <!--
//! WIP: once it is ready to be shipped
//! ### Code of Conduct
//!
//! We are committed to providing a friendly, safe and welcoming
//! environment. Read more about our policy on the [code-of-conduct][coc] page.
//!
//! [coc]: https://element-hq.github.io/element-call/book/policies/code-of-conduct.md
//! -->
//!
//! [contrib]: https://element-hq.github.io/element-call/book/blob/main/CONTRIBUTING.md
//! [contributors-guide]: https://element-hq.github.io/element-call/book/latest/development/contributing_guide.html
//! [element-documentation]: https://element-hq.github.io/element-call/book/latest
//! [element-call-room-matrix]: https://matrix.to/#/#element-call-dev:matrix.org
//!
//! #### Translations
//!
//! We'd love help translating the book! See the [Translations] label
//! to join efforts that are currently in progress. Open a new
//! issue to start working on a new language! We're waiting on [mdbook
//! support] for multiple languages to be finalized, but feel free to
//! start! A [pull request] looks promising. The mainline version (we
//! depend on v0.4.12) is capable of rendering the existing versions
//! where sources are installed in the intended final structure.
//!
//! [Translations]: https://gitea.networkx.de/rzerres/element-call/book/issues?q=is%3Aopen+is%3Aissue+label%3ATranslations
//! [mdbook support]: https://github.com/rust-lang-nursery/mdBook/issues/5
//! [pull request]: https://github.com/rust-lang/mdBook/pull/1306
//!
//! ### Spellchecking
//!
//! To scan source files for spelling errors, you can use the `spellcheck.sh`
//! script. It needs a dictionary of valid words, which is provided in
//! `dictionary.txt`. If the script produces a false positive (say, you used the word
//! `BTreeMap`, which the script considers invalid), you need to add this word to
//! `dictionary.txt` (keep the sorted order for consistency).
//!
//! ### License
//!
//! <!-- License source -->
//! [Logo-CC_BY]: https://i.creativecommons.org/l/by/4.0/88x31.png "Creative Common Logo"
//! [License-CC_BY]: https://creativecommons.org/licenses/by/4.0/legalcode "Creative Common License"
//!
//! This work is licensed under a [Creative Commons License 4.0][License-CC_BY].
//!
//! ![Creative Common Logo][Logo-CC_BY]
//!
//! © 2024 Ralf Zerres
theme/2020-edition.css (new file, 50 lines)
/*
Taken from the reference.
Warnings and notes:
Write the <div>s on their own line. E.g.
<div class="warning">
Warning: This is bad!
</div>
*/
main .warning p {
    padding: 10px 20px;
    margin: 20px 0;
}

main .warning p::before {
    content: "⚠️ ";
}

.light main .warning p,
.rust main .warning p {
    border: 2px solid red;
    background: #ffcece;
}

.rust main .warning p {
    /* overrides previous declaration */
    border-color: #961717;
}

.coal main .warning p,
.navy main .warning p,
.ayu main .warning p {
    background: #542626;
}

/* Make the links higher contrast on dark themes */
.coal main .warning p a,
.navy main .warning p a,
.ayu main .warning p a {
    color: #80d0d0;
}

span.caption {
    font-size: .8em;
    font-weight: 600;
}

span.caption code {
    font-size: 0.875em;
    font-weight: 400;
}
tools/convert-quotes.sh (new executable file, 13 lines)
#!/bin/bash

set -eu

dir=$1

mkdir -p "tmp/$dir"

for f in "$dir"/*.md
do
    cargo run --bin convert_quotes < "$f" > "tmp/$f"
    mv "tmp/$f" "$f"
done
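The write-to-tmp-then-mv pattern above matters: with `set -eu`, a failing conversion aborts the script before the `mv`, so the original chapter file is never clobbered by partial output. The same pattern, with `tr` standing in for `cargo run --bin convert_quotes` as a trivial transform (paths here are scratch directories for illustration):

```shell
# Same tmp-then-mv pattern; `tr` swaps straight double quotes for single ones.
mkdir -p /tmp/cq_demo /tmp/cq_demo_tmp
printf 'say "hi"\n' > /tmp/cq_demo/a.md
for f in /tmp/cq_demo/*.md
do
    tr '"' "'" < "$f" > "/tmp/cq_demo_tmp/$(basename "$f")"
    mv "/tmp/cq_demo_tmp/$(basename "$f")" "$f"
done
cat /tmp/cq_demo/a.md   # prints: say 'hi'
```
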
tools/doc-to-md.sh (new executable file, 20 lines)
#!/bin/bash

set -eu

# Get all the docx files in the tmp dir.
ls tmp/*.docx | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename -s .docx | \
while IFS= read -r filename; do
    # Make a directory to put the XML in.
    mkdir -p "tmp/$filename"
    # Unzip the docx to get at the XML.
    unzip -o "tmp/$filename.docx" -d "tmp/$filename"
    # Convert to markdown with XSL.
    xsltproc tools/docx-to-md.xsl "tmp/$filename/word/document.xml" | \
    # Hard wrap at 80 chars at word boundaries.
    fold -w 80 -s | \
    # Remove trailing whitespace and save in the `nostarch` dir for comparison.
    sed -e "s/ *$//" > "nostarch/$filename.md"
done
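The wrap-and-trim stages of the pipeline can be observed in isolation: `fold -s` breaks lines at blanks but leaves the break blank at the end of each wrapped line, which is exactly what the trailing `sed` strips. A small sketch with a narrower width:

```shell
# fold -s wraps at word boundaries (leaving trailing blanks);
# sed then trims those trailing blanks.
printf 'aaa bbb ccc ddd\n' | fold -w 7 -s | sed -e 's/ *$//'
```
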
tools/docx-to-md.xsl (new file, 220 lines)
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml">
<xsl:output method="text" />
<xsl:template match="/">
    <xsl:apply-templates select="/w:document/w:body/*" />
</xsl:template>

<!-- Ignore these -->
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'TOC')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents1')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents2')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents3')]" />

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ChapterStart']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Normal']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Standard']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'AuthorQuery']" />

<xsl:template match="w:p[w:pPr[not(w:pStyle)]]" />

<!-- Paragraph styles -->

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ChapterTitle']">
    <xsl:text> [TOC] </xsl:text>
    <xsl:text># </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadA']">
    <xsl:text>## </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadB']">
    <xsl:text>### </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadC']">
    <xsl:text>#### </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadBox']">
    <xsl:text>### </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'NumListA' or @w:val = 'NumListB']]">
    <xsl:text>1. </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'NumListC']]">
    <xsl:text>1. </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BulletA' or @w:val = 'BulletB' or @w:val = 'ListPlainA' or @w:val = 'ListPlainB']]">
    <xsl:text>* </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BulletC' or @w:val = 'ListPlainC']]">
    <xsl:text>* </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'SubBullet']]">
    <xsl:text> * </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BodyFirst' or @w:val = 'Body' or @w:val = 'BodyFirstBox' or @w:val = 'BodyBox' or @w:val = '1stPara']]">
    <xsl:if test=".//w:t">
        <xsl:apply-templates select="*" />
        <xsl:text> </xsl:text>
    </xsl:if>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeA' or @w:val = 'CodeAWingding']]">
    <xsl:text>``` </xsl:text>
    <!-- Don't apply Emphasis/etc templates in code blocks -->
    <xsl:for-each select="w:r">
        <xsl:value-of select="w:t" />
    </xsl:for-each>
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeB' or @w:val = 'CodeBWingding']]">
    <!-- Don't apply Emphasis/etc templates in code blocks -->
    <xsl:for-each select="w:r">
        <xsl:value-of select="w:t" />
    </xsl:for-each>
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeC' or @w:val = 'CodeCWingding']]">
    <!-- Don't apply Emphasis/etc templates in code blocks -->
    <xsl:for-each select="w:r">
        <xsl:value-of select="w:t" />
    </xsl:for-each>
    <xsl:text> ``` </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'CodeSingle']">
    <xsl:text>``` </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> ``` </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ProductionDirective']">
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'Caption' or @w:val = 'TableTitle' or @w:val = 'Caption1' or @w:val = 'Listing']]">
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BlockQuote']]">
    <xsl:text>> </xsl:text>
    <xsl:apply-templates select="*" />
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BlockText']]">
    <xsl:text> </xsl:text>
    <xsl:text>> </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Note']">
    <xsl:text>> </xsl:text>
    <xsl:apply-templates select="*" />
    <xsl:text> </xsl:text>
</xsl:template>

<xsl:template match="w:p">
    Unmatched: <xsl:value-of select="w:pPr/w:pStyle/@w:val" />
    <xsl:text>
</xsl:text>
</xsl:template>

<!-- Character styles -->

<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'Literal' or @w:val = 'LiteralBold' or @w:val = 'LiteralCaption' or @w:val = 'LiteralBox']]">
    <xsl:choose>
        <xsl:when test="normalize-space(w:t) != ''">
            <xsl:if test="starts-with(w:t, ' ')">
                <xsl:text> </xsl:text>
            </xsl:if>
            <xsl:text>`</xsl:text>
            <xsl:value-of select="normalize-space(w:t)" />
            <xsl:text>`</xsl:text>
            <xsl:if test="substring(w:t, string-length(w:t)) = ' '">
                <xsl:text> </xsl:text>
            </xsl:if>
        </xsl:when>
        <xsl:when test="normalize-space(w:t) != w:t and w:t != ''">
            <xsl:text> </xsl:text>
        </xsl:when>
    </xsl:choose>
</xsl:template>

<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'EmphasisBold']]">
    <xsl:choose>
        <xsl:when test="normalize-space(w:t) != ''">
            <xsl:if test="starts-with(w:t, ' ')">
                <xsl:text> </xsl:text>
            </xsl:if>
            <xsl:text>**</xsl:text>
            <xsl:value-of select="normalize-space(w:t)" />
            <xsl:text>**</xsl:text>
            <xsl:if test="substring(w:t, string-length(w:t)) = ' '">
                <xsl:text> </xsl:text>
            </xsl:if>
        </xsl:when>
        <xsl:when test="normalize-space(w:t) != w:t and w:t != ''">
            <xsl:text> </xsl:text>
        </xsl:when>
    </xsl:choose>
</xsl:template>

<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'EmphasisItalic' or @w:val = 'EmphasisItalicBox' or @w:val = 'EmphasisNote' or @w:val = 'EmphasisRevCaption' or @w:val = 'EmphasisRevItal']]">
    <xsl:choose>
        <xsl:when test="normalize-space(w:t) != ''">
            <xsl:if test="starts-with(w:t, ' ')">
                <xsl:text> </xsl:text>
            </xsl:if>
            <xsl:text>*</xsl:text>
            <xsl:value-of select="normalize-space(w:t)" />
            <xsl:text>*</xsl:text>
            <xsl:if test="substring(w:t, string-length(w:t)) = ' '">
                <xsl:text> </xsl:text>
            </xsl:if>
        </xsl:when>
        <xsl:otherwise>
            <xsl:text> </xsl:text>
        </xsl:otherwise>
    </xsl:choose>
</xsl:template>

<xsl:template match="w:r">
    <xsl:value-of select="w:t" />
</xsl:template>
</xsl:stylesheet>
22
tools/megadiff.sh
Executable file
@@ -0,0 +1,22 @@
#!/bin/bash

set -eu

# Remove files that are never affected by rustfmt or are otherwise uninteresting
rm -rf tmp/book-before/css/ tmp/book-before/theme/ tmp/book-before/img/ tmp/book-before/*.js \
  tmp/book-before/FontAwesome tmp/book-before/*.css tmp/book-before/*.png \
  tmp/book-before/*.json tmp/book-before/print.html

rm -rf tmp/book-after/css/ tmp/book-after/theme/ tmp/book-after/img/ tmp/book-after/*.js \
  tmp/book-after/FontAwesome tmp/book-after/*.css tmp/book-after/*.png \
  tmp/book-after/*.json tmp/book-after/print.html

# Get all the html files before
ls tmp/book-before/*.html | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename | \
while IFS= read -r filename; do
  # Remove any files that are the same before and after
  diff "tmp/book-before/$filename" "tmp/book-after/$filename" > /dev/null \
    && rm "tmp/book-before/$filename" "tmp/book-after/$filename"
done
27
tools/nostarch.sh
Executable file
@@ -0,0 +1,27 @@
#!/bin/bash

set -eu

cargo build --release

mkdir -p tmp
rm -rf tmp/*.md
rm -rf tmp/markdown

# Render the book as Markdown to include all the code listings
MDBOOK_OUTPUT__MARKDOWN=1 mdbook build -d tmp

# Get all the Markdown files
ls tmp/markdown/${1:-""}*.md | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename | \
# Remove all links followed by `<!-- ignore -->`, then
# change all remaining links from Markdown to italicized inline text.
while IFS= read -r filename; do
  < "tmp/markdown/$filename" ./target/release/remove_links \
    | ./target/release/link2print \
    | ./target/release/remove_markup \
    | ./target/release/remove_hidden_lines > "tmp/$filename"
done
# Concatenate the files into the `nostarch` dir.
./target/release/concat_chapters tmp nostarch
115
tools/src/bin/concat_chapters.rs
Normal file
@@ -0,0 +1,115 @@
#[macro_use]
extern crate lazy_static;

use std::collections::BTreeMap;
use std::env;
use std::fs::{create_dir, read_dir, File};
use std::io;
use std::io::{Read, Write};
use std::path::{Path, PathBuf};
use std::process::exit;

use regex::Regex;

static PATTERNS: &'static [(&'static str, &'static str)] = &[
    (r"ch(\d\d)-\d\d-.*\.md", "chapter$1.md"),
    (r"appendix-(\d\d).*\.md", "appendix.md"),
];

lazy_static! {
    static ref MATCHERS: Vec<(Regex, &'static str)> = {
        PATTERNS
            .iter()
            .map(|&(expr, repl)| (Regex::new(expr).unwrap(), repl))
            .collect()
    };
}

fn main() {
    let args: Vec<String> = env::args().collect();

    if args.len() < 3 {
        println!("Usage: {} <src-dir> <target-dir>", args[0]);
        exit(1);
    }

    let source_dir = ensure_dir_exists(&args[1]).unwrap();
    let target_dir = ensure_dir_exists(&args[2]).unwrap();

    let mut matched_files = match_files(source_dir, target_dir);
    matched_files.sort();

    for (target_path, source_paths) in group_by_target(matched_files) {
        concat_files(source_paths, target_path).unwrap();
    }
}

fn match_files(
    source_dir: &Path,
    target_dir: &Path,
) -> Vec<(PathBuf, PathBuf)> {
    read_dir(source_dir)
        .expect("Unable to read source directory")
        .filter_map(|maybe_entry| maybe_entry.ok())
        .filter_map(|entry| {
            let source_filename = entry.file_name();
            let source_filename =
                &source_filename.to_string_lossy().into_owned();
            for &(ref regex, replacement) in MATCHERS.iter() {
                if regex.is_match(source_filename) {
                    let target_filename =
                        regex.replace_all(source_filename, replacement);
                    let source_path = entry.path();
                    let mut target_path = PathBuf::from(&target_dir);
                    target_path.push(target_filename.to_string());
                    return Some((source_path, target_path));
                }
            }
            None
        })
        .collect()
}

fn group_by_target(
    matched_files: Vec<(PathBuf, PathBuf)>,
) -> BTreeMap<PathBuf, Vec<PathBuf>> {
    let mut grouped: BTreeMap<PathBuf, Vec<PathBuf>> = BTreeMap::new();
    for (source, target) in matched_files {
        if let Some(source_paths) = grouped.get_mut(&target) {
            source_paths.push(source);
            continue;
        }
        let source_paths = vec![source];
        grouped.insert(target.clone(), source_paths);
    }
    grouped
}

fn concat_files(
    source_paths: Vec<PathBuf>,
    target_path: PathBuf,
) -> io::Result<()> {
    println!("Concatenating into {}:", target_path.to_string_lossy());
    let mut target = File::create(target_path)?;
    target.write_all(b"\n[TOC]\n")?;

    for path in source_paths {
        println!("  {}", path.to_string_lossy());
        let mut source = File::open(path)?;
        let mut contents: Vec<u8> = Vec::new();
        source.read_to_end(&mut contents)?;

        target.write_all(b"\n")?;
        target.write_all(&contents)?;
        target.write_all(b"\n")?;
    }
    Ok(())
}

fn ensure_dir_exists(dir_string: &str) -> io::Result<&Path> {
    let path = Path::new(dir_string);
    if !path.exists() {
        create_dir(path)?;
    }
    Ok(&path)
}
78
tools/src/bin/convert_quotes.rs
Normal file
@@ -0,0 +1,78 @@
use std::io;
use std::io::{Read, Write};

fn main() {
    let mut is_in_code_block = false;
    let mut is_in_inline_code = false;
    let mut is_in_html_tag = false;

    let mut buffer = String::new();
    if let Err(e) = io::stdin().read_to_string(&mut buffer) {
        panic!("{}", e);
    }

    for line in buffer.lines() {
        if line.is_empty() {
            is_in_inline_code = false;
        }
        if line.starts_with("```") {
            is_in_code_block = !is_in_code_block;
        }
        if is_in_code_block {
            is_in_inline_code = false;
            is_in_html_tag = false;
            write!(io::stdout(), "{}\n", line).unwrap();
        } else {
            let modified_line = &mut String::new();
            let mut previous_char = std::char::REPLACEMENT_CHARACTER;
            let mut chars_in_line = line.chars();

            while let Some(possible_match) = chars_in_line.next() {
                // Check if inside inline code.
                if possible_match == '`' {
                    is_in_inline_code = !is_in_inline_code;
                }
                // Check if inside HTML tag.
                if possible_match == '<' && !is_in_inline_code {
                    is_in_html_tag = true;
                }
                if possible_match == '>' && !is_in_inline_code {
                    is_in_html_tag = false;
                }

                // Replace with right/left apostrophe/quote.
                let char_to_push = if possible_match == '\''
                    && !is_in_inline_code
                    && !is_in_html_tag
                {
                    if (previous_char != std::char::REPLACEMENT_CHARACTER
                        && !previous_char.is_whitespace())
                        || previous_char == '‘'
                    {
                        '’'
                    } else {
                        '‘'
                    }
                } else if possible_match == '"'
                    && !is_in_inline_code
                    && !is_in_html_tag
                {
                    if (previous_char != std::char::REPLACEMENT_CHARACTER
                        && !previous_char.is_whitespace())
                        || previous_char == '“'
                    {
                        '”'
                    } else {
                        '“'
                    }
                } else {
                    // Leave untouched.
                    possible_match
                };
                modified_line.push(char_to_push);
                previous_char = char_to_push;
            }
            write!(io::stdout(), "{}\n", modified_line).unwrap();
        }
    }
}
252
tools/src/bin/lfp.rs
Normal file
@@ -0,0 +1,252 @@
// We have some long regex literals, so:
// ignore-tidy-linelength

use docopt::Docopt;
use serde::Deserialize;
use std::io::BufRead;
use std::{fs, io, path};

fn main() {
    let args: Args = Docopt::new(USAGE)
        .and_then(|d| d.deserialize())
        .unwrap_or_else(|e| e.exit());

    let src_dir = &path::Path::new(&args.arg_src_dir);
    let found_errs = walkdir::WalkDir::new(src_dir)
        .min_depth(1)
        .into_iter()
        .map(|entry| match entry {
            Ok(entry) => entry,
            Err(err) => {
                eprintln!("{:?}", err);
                std::process::exit(911)
            }
        })
        .map(|entry| {
            let path = entry.path();
            if is_file_of_interest(path) {
                let err_vec = lint_file(path);
                for err in &err_vec {
                    match *err {
                        LintingError::LineOfInterest(line_num, ref line) => {
                            eprintln!(
                                "{}:{}\t{}",
                                path.display(),
                                line_num,
                                line
                            )
                        }
                        LintingError::UnableToOpenFile => {
                            eprintln!("Unable to open {}.", path.display())
                        }
                    }
                }
                !err_vec.is_empty()
            } else {
                false
            }
        })
        .collect::<Vec<_>>()
        .iter()
        .any(|result| *result);

    if found_errs {
        std::process::exit(1)
    } else {
        std::process::exit(0)
    }
}

const USAGE: &'static str = "
counter

Usage:
  lfp <src-dir>
  lfp (-h | --help)

Options:
  -h --help Show this screen.
";

#[derive(Debug, Deserialize)]
struct Args {
    arg_src_dir: String,
}

fn lint_file(path: &path::Path) -> Vec<LintingError> {
    match fs::File::open(path) {
        Ok(file) => lint_lines(io::BufReader::new(&file).lines()),
        Err(_) => vec![LintingError::UnableToOpenFile],
    }
}

fn lint_lines<I>(lines: I) -> Vec<LintingError>
where
    I: Iterator<Item = io::Result<String>>,
{
    lines
        .enumerate()
        .map(|(line_num, line)| {
            let raw_line = line.unwrap();
            if is_line_of_interest(&raw_line) {
                Err(LintingError::LineOfInterest(line_num, raw_line))
            } else {
                Ok(())
            }
        })
        .filter(|result| result.is_err())
        .map(|result| result.unwrap_err())
        .collect()
}

fn is_file_of_interest(path: &path::Path) -> bool {
    path.extension().map_or(false, |ext| ext == "md")
}

fn is_line_of_interest(line: &str) -> bool {
    !line
        .split_whitespace()
        .filter(|sub_string| {
            sub_string.contains("file://")
                && !sub_string.contains("file:///projects/")
        })
        .collect::<Vec<_>>()
        .is_empty()
}

#[derive(Debug)]
enum LintingError {
    UnableToOpenFile,
    LineOfInterest(usize, String),
}

#[cfg(test)]
mod tests {

    use std::path;

    #[test]
    fn lint_file_returns_a_vec_with_errs_when_lines_of_interest_are_found() {
        let string = r#"
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/guessing_game`
Guess the number!
The secret number is: 61
Please input your guess.
10
You guessed: 10
Too small!
Please input your guess.
99
You guessed: 99
Too big!
Please input your guess.
foo
Please input your guess.
61
You guessed: 61
You win!
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/debug/guessing_game`
Guess the number!
The secret number is: 7
Please input your guess.
4
You guessed: 4
$ cargo run
Running `target/debug/guessing_game`
Guess the number!
The secret number is: 83
Please input your guess.
5
$ cargo run
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
Running `target/debug/guessing_game`
Hello, world!
"#;

        let raw_lines = string.to_string();
        let lines = raw_lines.lines().map(|line| Ok(line.to_string()));

        let result_vec = super::lint_lines(lines);

        assert!(!result_vec.is_empty());
        assert_eq!(3, result_vec.len());
    }

    #[test]
    fn lint_file_returns_an_empty_vec_when_no_lines_of_interest_are_found() {
        let string = r#"
$ cargo run
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
Running `target/guessing_game`
Guess the number!
The secret number is: 61
Please input your guess.
10
You guessed: 10
Too small!
Please input your guess.
99
You guessed: 99
Too big!
Please input your guess.
foo
Please input your guess.
61
You guessed: 61
You win!
"#;

        let raw_lines = string.to_string();
        let lines = raw_lines.lines().map(|line| Ok(line.to_string()));

        let result_vec = super::lint_lines(lines);

        assert!(result_vec.is_empty());
    }

    #[test]
    fn is_file_of_interest_returns_false_when_the_path_is_a_directory() {
        let uninteresting_fn = "src/img";

        assert!(!super::is_file_of_interest(path::Path::new(
            uninteresting_fn
        )));
    }

    #[test]
    fn is_file_of_interest_returns_false_when_the_filename_does_not_have_the_md_extension(
    ) {
        let uninteresting_fn = "src/img/foo1.png";

        assert!(!super::is_file_of_interest(path::Path::new(
            uninteresting_fn
        )));
    }

    #[test]
    fn is_file_of_interest_returns_true_when_the_filename_has_the_md_extension()
    {
        let interesting_fn = "src/ch01-00-introduction.md";

        assert!(super::is_file_of_interest(path::Path::new(interesting_fn)));
    }

    #[test]
    fn is_line_of_interest_does_not_report_a_line_if_the_line_contains_a_file_url_which_is_directly_followed_by_the_project_path(
    ) {
        let sample_line =
            "Compiling guessing_game v0.1.0 (file:///projects/guessing_game)";

        assert!(!super::is_line_of_interest(sample_line));
    }

    #[test]
    fn is_line_of_interest_reports_a_line_if_the_line_contains_a_file_url_which_is_not_directly_followed_by_the_project_path(
    ) {
        let sample_line = "Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)";

        assert!(super::is_line_of_interest(sample_line));
    }
}
415
tools/src/bin/link2print.rs
Normal file
@@ -0,0 +1,415 @@
// FIXME: we have some long lines that could be refactored, but it's not a big deal.
// ignore-tidy-linelength

use regex::{Captures, Regex};
use std::collections::HashMap;
use std::io;
use std::io::{Read, Write};

fn main() {
    write_md(parse_links(parse_references(read_md())));
}

fn read_md() -> String {
    let mut buffer = String::new();
    match io::stdin().read_to_string(&mut buffer) {
        Ok(_) => buffer,
        Err(error) => panic!("{}", error),
    }
}

fn write_md(output: String) {
    write!(io::stdout(), "{}", output).unwrap();
}

fn parse_references(buffer: String) -> (String, HashMap<String, String>) {
    let mut ref_map = HashMap::new();
    // FIXME: currently doesn't handle "title" in following line.
    let re = Regex::new(r###"(?m)\n?^ {0,3}\[([^]]+)\]:[[:blank:]]*(.*)$"###)
        .unwrap();
    let output = re.replace_all(&buffer, |caps: &Captures<'_>| {
        let key = caps.get(1).unwrap().as_str().to_uppercase();
        let val = caps.get(2).unwrap().as_str().to_string();
        if ref_map.insert(key, val).is_some() {
            panic!("Did not expect markdown page to have duplicate reference");
        }
        "".to_string()
    }).to_string();
    (output, ref_map)
}

fn parse_links((buffer, ref_map): (String, HashMap<String, String>)) -> String {
    // FIXME: check which punctuation is allowed by spec.
    let re = Regex::new(r###"(?:(?P<pre>(?:```(?:[^`]|`[^`])*`?\n```\n)|(?:[^\[]`[^`\n]+[\n]?[^`\n]*`))|(?:\[(?P<name>[^]]+)\](?:(?:\([[:blank:]]*(?P<val>[^")]*[^ ])(?:[[:blank:]]*"[^"]*")?\))|(?:\[(?P<key>[^]]*)\]))?))"###).expect("could not create regex");
    let error_code =
        Regex::new(r###"^E\d{4}$"###).expect("could not create regex");
    let output = re.replace_all(&buffer, |caps: &Captures<'_>| {
        match caps.name("pre") {
            Some(pre_section) => format!("{}", pre_section.as_str()),
            None => {
                let name = caps.name("name").expect("could not get name").as_str();
                // Really we should ignore text inside code blocks,
                // this is a hack to not try to treat `#[derive()]`,
                // `[profile]`, `[test]`, or `[E\d\d\d\d]` like a link.
                if name.starts_with("derive(") ||
                   name.starts_with("profile") ||
                   name.starts_with("test") ||
                   name.starts_with("no_mangle") ||
                   error_code.is_match(name) {
                    return name.to_string()
                }

                let val = match caps.name("val") {
                    // `[name](link)`
                    Some(value) => value.as_str().to_string(),
                    None => {
                        match caps.name("key") {
                            Some(key) => {
                                match key.as_str() {
                                    // `[name][]`
                                    "" => format!("{}", ref_map.get(&name.to_uppercase()).expect(&format!("could not find url for the link text `{}`", name))),
                                    // `[name][reference]`
                                    _ => format!("{}", ref_map.get(&key.as_str().to_uppercase()).expect(&format!("could not find url for the link text `{}`", key.as_str()))),
                                }
                            }
                            // `[name]` as reference
                            None => format!("{}", ref_map.get(&name.to_uppercase()).expect(&format!("could not find url for the link text `{}`", name))),
                        }
                    }
                };
                format!("{} at *{}*", name, val)
            }
        }
    });
    output.to_string()
}

#[cfg(test)]
mod tests {
    fn parse(source: String) -> String {
        super::parse_links(super::parse_references(source))
    }

    #[test]
    fn parses_inline_link() {
        let source =
            r"This is a [link](http://google.com) that should be expanded"
                .to_string();
        let target =
            r"This is a link at *http://google.com* that should be expanded"
                .to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_multiline_links() {
        let source = r"This is a [link](http://google.com) that
should appear expanded. Another [location](/here/) and [another](http://gogogo)"
            .to_string();
        let target = r"This is a link at *http://google.com* that
should appear expanded. Another location at */here/* and another at *http://gogogo*"
            .to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_reference() {
        let source = r"This is a [link][theref].
[theref]: http://example.com/foo
more text"
            .to_string();
        let target = r"This is a link at *http://example.com/foo*.
more text"
            .to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_implicit_link() {
        let source = r"This is an [implicit][] link.
[implicit]: /The Link/"
            .to_string();
        let target = r"This is an implicit at */The Link/* link.".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_refs_with_one_space_indentation() {
        let source = r"This is a [link][ref]
 [ref]: The link"
            .to_string();
        let target = r"This is a link at *The link*".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_refs_with_two_space_indentation() {
        let source = r"This is a [link][ref]
  [ref]: The link"
            .to_string();
        let target = r"This is a link at *The link*".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_refs_with_three_space_indentation() {
        let source = r"This is a [link][ref]
   [ref]: The link"
            .to_string();
        let target = r"This is a link at *The link*".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    #[should_panic]
    fn rejects_refs_with_four_space_indentation() {
        let source = r"This is a [link][ref]
    [ref]: The link"
            .to_string();
        let target = r"This is a link at *The link*".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn ignores_optional_inline_title() {
        let source =
            r###"This is a titled [link](http://example.com "My title")."###
                .to_string();
        let target =
            r"This is a titled link at *http://example.com*.".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_title_with_punctuation() {
        let source =
            r###"[link](http://example.com "It's Title")"###.to_string();
        let target = r"link at *http://example.com*".to_string();
        assert_eq!(parse(source), target);
    }

    #[test]
    fn parses_name_with_punctuation() {
        let source = r###"[I'm here](there)"###.to_string();
        let target = r###"I'm here at *there*"###.to_string();
        assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn parses_name_with_utf8() {
|
||||||
|
let source = r###"[user’s forum](the user’s forum)"###.to_string();
|
||||||
|
let target =
|
||||||
|
r###"user’s forum at *the user’s forum*"###.to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn parses_reference_with_punctuation() {
|
||||||
|
let source = r###"[link][the ref-ref]
|
||||||
|
[the ref-ref]:http://example.com/ref-ref"###
|
||||||
|
.to_string();
|
||||||
|
let target = r###"link at *http://example.com/ref-ref*"###.to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn parses_reference_case_insensitively() {
|
||||||
|
let source = r"[link][Ref]
|
||||||
|
[ref]: The reference"
|
||||||
|
.to_string();
|
||||||
|
let target = r"link at *The reference*".to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn parses_link_as_reference_when_reference_is_empty() {
|
||||||
|
let source = r"[link as reference][]
|
||||||
|
[link as reference]: the actual reference"
|
||||||
|
.to_string();
|
||||||
|
let target = r"link as reference at *the actual reference*".to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn parses_link_without_reference_as_reference() {
|
||||||
|
let source = r"[link] is alone
|
||||||
|
[link]: The contents"
|
||||||
|
.to_string();
|
||||||
|
let target = r"link at *The contents* is alone".to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
#[ignore]
|
||||||
|
fn parses_link_without_reference_as_reference_with_asterisks() {
|
||||||
|
let source = r"*[link]* is alone
|
||||||
|
[link]: The contents"
|
||||||
|
.to_string();
|
||||||
|
let target = r"*link* at *The contents* is alone".to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn ignores_links_in_pre_sections() {
|
||||||
|
let source = r###"```toml
|
||||||
|
[package]
|
||||||
|
name = "hello_cargo"
|
||||||
|
version = "0.1.0"
|
||||||
|
authors = ["Your Name <you@example.com>"]
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
```
|
||||||
|
"###
|
||||||
|
.to_string();
|
||||||
|
let target = source.clone();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn ignores_links_in_quoted_sections() {
|
||||||
|
let source = r###"do not change `[package]`."###.to_string();
|
||||||
|
let target = source.clone();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn ignores_links_in_quoted_sections_containing_newlines() {
|
||||||
|
let source = r"do not change `this [package]
|
||||||
|
is still here` [link](ref)"
|
||||||
|
.to_string();
|
||||||
|
let target = r"do not change `this [package]
|
||||||
|
is still here` link at *ref*"
|
||||||
|
.to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn ignores_links_in_pre_sections_while_still_handling_links() {
|
||||||
|
let source = r###"```toml
|
||||||
|
[package]
|
||||||
|
name = "hello_cargo"
|
||||||
|
version = "0.1.0"
|
||||||
|
authors = ["Your Name <you@example.com>"]
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
```
|
||||||
|
Another [link]
|
||||||
|
more text
|
||||||
|
[link]: http://gohere
|
||||||
|
"###
|
||||||
|
.to_string();
|
||||||
|
let target = r###"```toml
|
||||||
|
[package]
|
||||||
|
name = "hello_cargo"
|
||||||
|
version = "0.1.0"
|
||||||
|
authors = ["Your Name <you@example.com>"]
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
```
|
||||||
|
Another link at *http://gohere*
|
||||||
|
more text
|
||||||
|
"###
|
||||||
|
.to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn ignores_quotes_in_pre_sections() {
|
||||||
|
let source = r###"```bash
|
||||||
|
$ cargo build
|
||||||
|
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
|
||||||
|
src/main.rs:23:21: 23:35 error: mismatched types [E0308]
|
||||||
|
src/main.rs:23 match guess.cmp(&secret_number) {
|
||||||
|
^~~~~~~~~~~~~~
|
||||||
|
src/main.rs:23:21: 23:35 help: run `rustc --explain E0308` to see a detailed explanation
|
||||||
|
src/main.rs:23:21: 23:35 note: expected type `&std::string::String`
|
||||||
|
src/main.rs:23:21: 23:35 note: found type `&_`
|
||||||
|
error: aborting due to previous error
|
||||||
|
Could not compile `guessing_game`.
|
||||||
|
```
|
||||||
|
"###
|
||||||
|
.to_string();
|
||||||
|
let target = source.clone();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn ignores_short_quotes() {
|
||||||
|
let source = r"to `1` at index `[0]` i".to_string();
|
||||||
|
let target = source.clone();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn ignores_pre_sections_with_final_quote() {
|
||||||
|
let source = r###"```bash
|
||||||
|
$ cargo run
|
||||||
|
Compiling points v0.1.0 (file:///projects/points)
|
||||||
|
error: the trait bound `Point: std::fmt::Display` is not satisfied [--explain E0277]
|
||||||
|
--> src/main.rs:8:29
|
||||||
|
8 |> println!("Point 1: {}", p1);
|
||||||
|
|> ^^
|
||||||
|
<std macros>:2:27: 2:58: note: in this expansion of format_args!
|
||||||
|
<std macros>:3:1: 3:54: note: in this expansion of print! (defined in <std macros>)
|
||||||
|
src/main.rs:8:5: 8:33: note: in this expansion of println! (defined in <std macros>)
|
||||||
|
note: `Point` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
|
||||||
|
note: required by `std::fmt::Display::fmt`
|
||||||
|
```
|
||||||
|
`here` is another [link](the ref)
|
||||||
|
"###.to_string();
|
||||||
|
let target = r###"```bash
|
||||||
|
$ cargo run
|
||||||
|
Compiling points v0.1.0 (file:///projects/points)
|
||||||
|
error: the trait bound `Point: std::fmt::Display` is not satisfied [--explain E0277]
|
||||||
|
--> src/main.rs:8:29
|
||||||
|
8 |> println!("Point 1: {}", p1);
|
||||||
|
|> ^^
|
||||||
|
<std macros>:2:27: 2:58: note: in this expansion of format_args!
|
||||||
|
<std macros>:3:1: 3:54: note: in this expansion of print! (defined in <std macros>)
|
||||||
|
src/main.rs:8:5: 8:33: note: in this expansion of println! (defined in <std macros>)
|
||||||
|
note: `Point` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
|
||||||
|
note: required by `std::fmt::Display::fmt`
|
||||||
|
```
|
||||||
|
`here` is another link at *the ref*
|
||||||
|
"###.to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
#[test]
|
||||||
|
fn parses_adam_p_cheatsheet() {
|
||||||
|
let source = r###"[I'm an inline-style link](https://www.google.com)
|
||||||
|
|
||||||
|
[I'm an inline-style link with title](https://www.google.com "Google's Homepage")
|
||||||
|
|
||||||
|
[I'm a reference-style link][Arbitrary case-insensitive reference text]
|
||||||
|
|
||||||
|
[I'm a relative reference to a repository file](../blob/master/LICENSE)
|
||||||
|
|
||||||
|
[You can use numbers for reference-style link definitions][1]
|
||||||
|
|
||||||
|
Or leave it empty and use the [link text itself][].
|
||||||
|
|
||||||
|
URLs and URLs in angle brackets will automatically get turned into links.
|
||||||
|
http://www.example.com or <http://www.example.com> and sometimes
|
||||||
|
example.com (but not on Github, for example).
|
||||||
|
|
||||||
|
Some text to show that the reference links can follow later.
|
||||||
|
|
||||||
|
[arbitrary case-insensitive reference text]: https://www.mozilla.org
|
||||||
|
[1]: http://slashdot.org
|
||||||
|
[link text itself]: http://www.reddit.com"###
|
||||||
|
.to_string();
|
||||||
|
|
||||||
|
let target = r###"I'm an inline-style link at *https://www.google.com*
|
||||||
|
|
||||||
|
I'm an inline-style link with title at *https://www.google.com*
|
||||||
|
|
||||||
|
I'm a reference-style link at *https://www.mozilla.org*
|
||||||
|
|
||||||
|
I'm a relative reference to a repository file at *../blob/master/LICENSE*
|
||||||
|
|
||||||
|
You can use numbers for reference-style link definitions at *http://slashdot.org*
|
||||||
|
|
||||||
|
Or leave it empty and use the link text itself at *http://www.reddit.com*.
|
||||||
|
|
||||||
|
URLs and URLs in angle brackets will automatically get turned into links.
|
||||||
|
http://www.example.com or <http://www.example.com> and sometimes
|
||||||
|
example.com (but not on Github, for example).
|
||||||
|
|
||||||
|
Some text to show that the reference links can follow later.
|
||||||
|
"###
|
||||||
|
.to_string();
|
||||||
|
assert_eq!(parse(source), target);
|
||||||
|
}
|
||||||
|
}
|
||||||
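The transformation these tests pin down, `[name](url)` becoming `name at *url*`, can be illustrated without the regex machinery the tool itself uses. This is a hypothetical stdlib-only sketch, not the tool's implementation; the name `expand_inline_links` is invented here, and it handles only simple inline links (no titles, references, or code spans):

```rust
// Minimal scanner for `[name](url)` inline links, rewriting each one to
// `name at *url*`. Everything outside a link is copied through unchanged.
fn expand_inline_links(input: &str) -> String {
    let mut out = String::new();
    let mut rest = input;
    while let Some(open) = rest.find('[') {
        if let Some(mid) = rest[open..].find("](") {
            let mid = open + mid;
            if let Some(end) = rest[mid..].find(')') {
                let end = mid + end;
                let name = &rest[open + 1..mid]; // text between [ and ](
                let url = &rest[mid + 2..end];   // text between ]( and )
                out.push_str(&rest[..open]);
                out.push_str(name);
                out.push_str(" at *");
                out.push_str(url);
                out.push('*');
                rest = &rest[end + 1..];
                continue;
            }
        }
        // No complete link after this bracket; emit it and keep scanning.
        out.push_str(&rest[..open + 1]);
        rest = &rest[open + 1..];
    }
    out.push_str(rest);
    out
}

fn main() {
    assert_eq!(
        expand_inline_links("See the [docs](http://example.com) here"),
        "See the docs at *http://example.com* here"
    );
    println!("ok");
}
```

The real tool additionally resolves reference-style links and skips fenced and inline code, which is why it is test-driven rather than a one-liner.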
159
tools/src/bin/release_listings.rs
Normal file
@@ -0,0 +1,159 @@
#[macro_use]
extern crate lazy_static;

use regex::Regex;
use std::error::Error;
use std::fs;
use std::fs::File;
use std::io::prelude::*;
use std::io::{BufReader, BufWriter};
use std::path::{Path, PathBuf};

fn main() -> Result<(), Box<dyn Error>> {
    // Get all listings from the `listings` directory
    let listings_dir = Path::new("listings");

    // Put the results in the `tmp/listings` directory
    let out_dir = Path::new("tmp/listings");

    // Clear out any existing content in `tmp/listings`
    if out_dir.is_dir() {
        fs::remove_dir_all(out_dir)?;
    }

    // Create a new, empty `tmp/listings` directory
    fs::create_dir(out_dir)?;

    // For each chapter in the `listings` directory,
    for chapter in fs::read_dir(listings_dir)? {
        let chapter = chapter?;
        let chapter_path = chapter.path();

        let chapter_name = chapter_path
            .file_name()
            .expect("Chapter should've had a name");

        // Create a corresponding chapter dir in `tmp/listings`
        let output_chapter_path = out_dir.join(chapter_name);
        fs::create_dir(&output_chapter_path)?;

        // For each listing in the chapter directory,
        for listing in fs::read_dir(chapter_path)? {
            let listing = listing?;
            let listing_path = listing.path();

            let listing_name = listing_path
                .file_name()
                .expect("Listing should've had a name");

            // Create a corresponding listing dir in the tmp chapter dir
            let output_listing_dir = output_chapter_path.join(listing_name);
            fs::create_dir(&output_listing_dir)?;

            // Copy all the cleaned files in the listing to the tmp directory
            copy_cleaned_listing_files(listing_path, output_listing_dir)?;
        }
    }

    // Create a compressed archive of all the listings
    let tarfile = File::create("tmp/listings.tar.gz")?;
    let encoder =
        flate2::write::GzEncoder::new(tarfile, flate2::Compression::default());
    let mut archive = tar::Builder::new(encoder);
    archive.append_dir_all("listings", "tmp/listings")?;

    // Assure whoever is running this that the script exited successfully, and
    // remind them where the generated file ends up
    println!("Release tarball of listings in tmp/listings.tar.gz");

    Ok(())
}

// Cleaned listings will not contain:
//
// - `target` directories
// - `output.txt` files used to display output in the book
// - `rustfmt-ignore` files used to signal to update-rustc.sh the listing shouldn't be formatted
// - anchor comments or snip comments
// - empty `main` functions in `lib.rs` files used to trick rustdoc
fn copy_cleaned_listing_files(
    from: PathBuf,
    to: PathBuf,
) -> Result<(), Box<dyn Error>> {
    for item in fs::read_dir(from)? {
        let item = item?;
        let item_path = item.path();

        let item_name =
            item_path.file_name().expect("Item should've had a name");
        let output_item = to.join(item_name);

        if item_path.is_dir() {
            // Don't copy `target` directories
            if item_name != "target" {
                fs::create_dir(&output_item)?;
                copy_cleaned_listing_files(item_path, output_item)?;
            }
        } else {
            // Don't copy output files or files that tell update-rustc.sh not to format
            if item_name != "output.txt" && item_name != "rustfmt-ignore" {
                let item_extension = item_path.extension();
                if item_extension.is_some() && item_extension.unwrap() == "rs" {
                    copy_cleaned_rust_file(
                        item_name,
                        &item_path,
                        &output_item,
                    )?;
                } else {
                    // Copy any non-Rust files without modification
                    fs::copy(item_path, output_item)?;
                }
            }
        }
    }

    Ok(())
}

lazy_static! {
    static ref ANCHOR_OR_SNIP_COMMENTS: Regex = Regex::new(
        r"(?x)
        //\s*ANCHOR:\s*[\w_-]+      # Remove all anchor comments
        |
        //\s*ANCHOR_END:\s*[\w_-]+  # Remove all anchor ending comments
        |
        //\s*--snip--               # Remove all snip comments
        "
    )
    .unwrap();
}

lazy_static! {
    static ref EMPTY_MAIN: Regex = Regex::new(r"fn main\(\) \{}").unwrap();
}

// Cleaned Rust files will not contain:
//
// - anchor comments or snip comments
// - empty `main` functions in `lib.rs` files used to trick rustdoc
fn copy_cleaned_rust_file(
    item_name: &std::ffi::OsStr,
    from: &PathBuf,
    to: &PathBuf,
) -> Result<(), Box<dyn Error>> {
    let from_buf = BufReader::new(File::open(from)?);
    let mut to_buf = BufWriter::new(File::create(to)?);

    for line in from_buf.lines() {
        let line = line?;
        if !ANCHOR_OR_SNIP_COMMENTS.is_match(&line) {
            if item_name != "lib.rs" || !EMPTY_MAIN.is_match(&line) {
                writeln!(&mut to_buf, "{}", line)?;
            }
        }
    }

    to_buf.flush()?;

    Ok(())
}
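The filtering rule that `ANCHOR_OR_SNIP_COMMENTS` encodes with a verbose regex can be sketched with plain string handling. This is a hypothetical, simplified stdlib-only version (the invented `is_anchor_or_snip` only recognizes comments at the start of a line, whereas the tool's regex matches anywhere in the line):

```rust
// Detect the three kinds of bookkeeping comments the release script strips:
// `// ANCHOR: name`, `// ANCHOR_END: name`, and `// --snip--`.
fn is_anchor_or_snip(line: &str) -> bool {
    let trimmed = line.trim_start();
    if let Some(comment) = trimmed.strip_prefix("//") {
        let comment = comment.trim_start();
        return comment.starts_with("ANCHOR:")
            || comment.starts_with("ANCHOR_END:")
            || comment.starts_with("--snip--");
    }
    false
}

fn main() {
    assert!(is_anchor_or_snip("// ANCHOR: definition"));
    assert!(is_anchor_or_snip("    // ANCHOR_END: definition"));
    assert!(is_anchor_or_snip("// --snip--"));
    assert!(!is_anchor_or_snip("let x = 5; // a normal comment"));
    println!("ok");
}
```

Keeping the patterns in one `(?x)` regex, as the script does, lets each alternative carry its own inline comment and keeps matching in a single pass per line.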
83
tools/src/bin/remove_hidden_lines.rs
Normal file
@@ -0,0 +1,83 @@
use std::io;
use std::io::prelude::*;

fn main() {
    write_md(remove_hidden_lines(&read_md()));
}

fn read_md() -> String {
    let mut buffer = String::new();
    match io::stdin().read_to_string(&mut buffer) {
        Ok(_) => buffer,
        Err(error) => panic!("{}", error),
    }
}

fn write_md(output: String) {
    write!(io::stdout(), "{}", output).unwrap();
}

fn remove_hidden_lines(input: &str) -> String {
    let mut resulting_lines = vec![];
    let mut within_codeblock = false;

    for line in input.lines() {
        if line.starts_with("```") {
            within_codeblock = !within_codeblock;
        }

        if !within_codeblock || (!line.starts_with("# ") && line != "#") {
            resulting_lines.push(line)
        }
    }

    resulting_lines.join("\n")
}

#[cfg(test)]
mod tests {
    use crate::remove_hidden_lines;

    #[test]
    fn hidden_line_in_code_block_is_removed() {
        let input = r#"
In this listing:

```
fn main() {
# secret
}
```

you can see that...
"#;
        let output = remove_hidden_lines(input);

        let desired_output = r#"
In this listing:

```
fn main() {
}
```

you can see that...
"#;

        assert_eq!(output, desired_output);
    }

    #[test]
    fn headings_arent_removed() {
        let input = r#"
# Heading 1
"#;
        let output = remove_hidden_lines(input);

        let desired_output = r#"
# Heading 1
"#;

        assert_eq!(output, desired_output);
    }
}
45
tools/src/bin/remove_links.rs
Normal file
@@ -0,0 +1,45 @@
extern crate regex;

use regex::{Captures, Regex};
use std::collections::HashSet;
use std::io;
use std::io::{Read, Write};

fn main() {
    let mut buffer = String::new();
    if let Err(e) = io::stdin().read_to_string(&mut buffer) {
        panic!("{}", e);
    }

    let mut refs = HashSet::new();

    // Capture all links and link references.
    let regex =
        r"\[([^\]]+)\](?:(?:\[([^\]]+)\])|(?:\([^\)]+\)))(?i)<!--\signore\s-->";
    let link_regex = Regex::new(regex).unwrap();
    let first_pass = link_regex.replace_all(&buffer, |caps: &Captures<'_>| {
        // Save the link reference we want to delete.
        if let Some(reference) = caps.get(2) {
            refs.insert(reference.as_str().to_string());
        }

        // Put the link title back.
        caps.get(1).unwrap().as_str().to_string()
    });

    // Search for the references we need to delete.
    let ref_regex = Regex::new(r"(?m)^\[([^\]]+)\]:\s.*\n").unwrap();
    let out = ref_regex.replace_all(&first_pass, |caps: &Captures<'_>| {
        let capture = caps.get(1).unwrap().to_owned();

        // Check if we've marked this reference for deletion ...
        if refs.contains(capture.as_str()) {
            return "".to_string();
        }

        // ... else we put back everything we captured.
        caps.get(0).unwrap().as_str().to_string()
    });

    write!(io::stdout(), "{}", out).unwrap();
}
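The second pass above, deleting only the reference definitions whose names were collected in the first pass, can be sketched without the `regex` crate. This is a hypothetical stdlib-only illustration; `strip_marked_refs` is a name invented here and it only recognizes definitions of the exact form `[name]: target` at the start of a line:

```rust
use std::collections::HashSet;

// Drop every `[name]: target` line whose name appears in `marked`,
// keeping all other lines (including unmarked reference definitions).
fn strip_marked_refs(input: &str, marked: &HashSet<&str>) -> String {
    input
        .lines()
        .filter(|line| match (line.strip_prefix('['), line.find("]: ")) {
            (Some(_), Some(end)) => !marked.contains(&line[1..end]),
            _ => true,
        })
        .map(|l| format!("{}\n", l))
        .collect()
}

fn main() {
    let mut marked = HashSet::new();
    marked.insert("gone");
    let input = "text\n[gone]: http://a\n[kept]: http://b\n";
    assert_eq!(strip_marked_refs(input, &marked), "text\n[kept]: http://b\n");
    println!("ok");
}
```

The tool itself uses a multiline regex for this so that the whole definition line, including its trailing newline, is removed in one replacement.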
51
tools/src/bin/remove_markup.rs
Normal file
@@ -0,0 +1,51 @@
extern crate regex;

use regex::{Captures, Regex};
use std::io;
use std::io::{Read, Write};

fn main() {
    write_md(remove_markup(read_md()));
}

fn read_md() -> String {
    let mut buffer = String::new();
    match io::stdin().read_to_string(&mut buffer) {
        Ok(_) => buffer,
        Err(error) => panic!("{}", error),
    }
}

fn write_md(output: String) {
    write!(io::stdout(), "{}", output).unwrap();
}

fn remove_markup(input: String) -> String {
    let filename_regex =
        Regex::new(r#"\A<span class="filename">(.*)</span>\z"#).unwrap();
    // Captions sometimes take up multiple lines.
    let caption_start_regex =
        Regex::new(r#"\A<span class="caption">(.*)\z"#).unwrap();
    let caption_end_regex = Regex::new(r#"(.*)</span>\z"#).unwrap();
    let regexen = vec![filename_regex, caption_start_regex, caption_end_regex];

    let lines: Vec<_> = input
        .lines()
        .flat_map(|line| {
            // Remove our syntax highlighting and rustdoc markers.
            if line.starts_with("```") {
                Some(String::from("```"))
            // Remove the span around filenames and captions.
            } else {
                let result =
                    regexen.iter().fold(line.to_string(), |result, regex| {
                        regex
                            .replace_all(&result, |caps: &Captures<'_>| {
                                caps.get(1).unwrap().as_str().to_string()
                            })
                            .to_string()
                    });
                Some(result)
            }
        })
        .collect();
    lines.join("\n")
}
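The first branch of `remove_markup`, collapsing every fence line such as ```` ```rust,ignore ```` to a bare ```` ``` ````, is simple enough to show in isolation. A hypothetical stdlib-only sketch (the name `normalize_fences` is invented here):

```rust
// Replace any line that opens or closes a fenced code block with a bare
// ``` fence, discarding the info string (language and rustdoc flags).
fn normalize_fences(input: &str) -> String {
    input
        .lines()
        .map(|line| if line.starts_with("```") { "```" } else { line })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let input = "```rust,ignore\nfn main() {}\n```";
    assert_eq!(normalize_fences(input), "```\nfn main() {}\n```");
    println!("ok");
}
```

In the tool this branch sits alongside the span-stripping regexes so both kinds of markup are removed in a single pass over the lines.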
89
tools/update-rustc.sh
Executable file
@@ -0,0 +1,89 @@
#!/bin/bash
# Note: this script assumes a macOS/BSD userland (`find -s`, `sed -i ''`).

set -eu

# Build the book before making any changes for comparison of the output.
echo 'Building book into `tmp/book-before` before updating...'
mdbook build -d tmp/book-before

# Rustfmt all listings
echo 'Formatting all listings...'
find -s listings -name Cargo.toml -print0 | while IFS= read -r -d '' f; do
    dir_to_fmt=$(dirname $f)

    # There are a handful of listings we don't want to rustfmt, and skipping
    # doesn't work; those will have a file in their directory that explains why.
    if [ ! -f "${dir_to_fmt}/rustfmt-ignore" ]; then
        cd $dir_to_fmt
        cargo fmt --all && true
        cd - > /dev/null
    fi
done

# Get listings without anchor comments in tmp by compiling a release listings artifact
echo 'Generate listings without anchor comments...'
cargo run --bin release_listings

root_dir=$(pwd)

echo 'Regenerating output...'
# For any listings where we show the output,
find -s listings -name output.txt -print0 | while IFS= read -r -d '' f; do
    build_directory=$(dirname $f)
    full_build_directory="${root_dir}/${build_directory}"
    full_output_path="${full_build_directory}/output.txt"
    tmp_build_directory="tmp/${build_directory}"

    cd $tmp_build_directory

    # Save the previous compile time; we're going to keep it to minimize diff
    # churn
    compile_time=$(sed -E -ne "s/.*Finished (dev|test) \[unoptimized \+ debuginfo] target\(s\) in ([0-9.]*).*/\2/p" ${full_output_path})

    # Save the hash from the first test binary; we're going to keep it to
    # minimize diff churn
    test_binary_hash=$(sed -E -ne 's@.*Running [^[:space:]]+ \(target/debug/deps/[^-]*-([^\s]*)\)@\1@p' ${full_output_path} | head -n 1)

    # Act like this is the first time this listing has been built
    cargo clean

    # Run the command in the existing output file
    cargo_command=$(sed -ne "s/$ \(.*\)/\1/p" ${full_output_path})

    # Clear the output file of everything except the command
    echo "$ ${cargo_command}" > ${full_output_path}

    # Regenerate the output and append to the output file. Turn some warnings
    # off to reduce output noise, and use one test thread to get consistent
    # ordering of tests in the output when the command is `cargo test`.
    RUSTFLAGS="-A unused_variables -A dead_code" RUST_TEST_THREADS=1 $cargo_command >> ${full_output_path} 2>&1 || true

    # Set the project file path to the projects directory plus the crate name
    # instead of a path to the computer of whoever is running this
    sed -i '' -E -e "s/(Compiling|Checking) ([^\)]*) v0.1.0 (.*)/\1 \2 v0.1.0 (file:\/\/\/projects\/\2)/" ${full_output_path}

    # Restore the previous compile time, if there is one
    if [ -n "${compile_time}" ]; then
        sed -i '' -E -e "s/Finished (dev|test) \[unoptimized \+ debuginfo] target\(s\) in [0-9.]*/Finished \1 [unoptimized + debuginfo] target(s) in ${compile_time}/" ${full_output_path}
    fi

    # Restore the previous test binary hash, if there is one
    if [ -n "${test_binary_hash}" ]; then
        replacement='s@Running ([^[:space:]]+) \(target/debug/deps/([^-]*)-([^\s]*)\)@Running \1 (target/debug/deps/\2-'
        replacement+="${test_binary_hash}"
        replacement+=')@g'
        sed -i '' -E -e "${replacement}" ${full_output_path}
    fi

    cd - > /dev/null
done

# Build the book after making all the changes
echo 'Building book into `tmp/book-after` after updating...'
mdbook build -d tmp/book-after

# Run the megadiff script that removes all files that are the same, leaving only files to audit
echo 'Removing tmp files that had no changes from the update...'
./tools/megadiff.sh

echo 'Done.'