initial commit: The OrbTK book source tree

All files closely mirror the book sources of the
rust-lang project itself. This should lower the barrier
for interested people to get involved in OrbTK and let
them reuse familiar workflow habits.

* LICENSE-MIT: The project's licensing terms
* README.md: GitHub front page
* CONTRIBUTING.md: Advice on how to help improve the book
* style-guide.md: Advice on how to improve the readability of
  generated prose and code.
* tools: layout helper scripts and Rust code
* ci: continuous integration helper scripts
* .gitattributes: set git default behaviours
* .gitignore: keep source tree sane
* Cargo.toml: package dependencies
* rustfmt.toml: formatting rules for rust code
* book.toml: mdBook configuration

Signed-off-by: Ralf Zerres <ralf.zerres@networkx.de>
2020-10-04 15:37:06 +02:00
commit 88233508fe
29 changed files with 2620 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,6 @@
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto eol=lf
*.docx binary
*.odt binary
*.png binary

.github/workflows/main.yml vendored Normal file

@@ -0,0 +1,64 @@
name: CI
on: [push, pull_request]
jobs:
  test:
    name: Run tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Update rustup
        run: rustup self update
      - name: Install Rust
        run: |
          rustup set profile minimal
          rustup toolchain install 1.41.0 -c rust-docs
          rustup default 1.41.0
      - name: Install mdbook
        run: |
          mkdir bin
          curl -sSL https://github.com/rust-lang/mdBook/releases/download/v0.3.7/mdbook-v0.3.7-x86_64-unknown-linux-gnu.tar.gz | tar -xz --directory=bin
          echo "##[add-path]$(pwd)/bin"
      - name: Report versions
        run: |
          rustup --version
          rustc -Vv
          mdbook --version
      - name: Run tests
        run: mdbook test
  lint:
    name: Run lints
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Update rustup
        run: rustup self update
      - name: Install Rust
        run: |
          rustup set profile minimal
          rustup toolchain install nightly -c rust-docs
          rustup default nightly
      - name: Install mdbook
        run: |
          mkdir bin
          curl -sSL https://github.com/rust-lang/mdBook/releases/download/v0.3.7/mdbook-v0.3.7-x86_64-unknown-linux-gnu.tar.gz | tar -xz --directory=bin
          echo "##[add-path]$(pwd)/bin"
      - name: Report versions
        run: |
          rustup --version
          rustc -Vv
          mdbook --version
      - name: Spellcheck
        run: bash ci/spellcheck.sh list
      - name: Lint for local file paths
        run: |
          mdbook build
          cargo run --bin lfp src
      - name: Validate references
        run: bash ci/validate.sh
      - name: Check for broken links
        run: |
          curl -sSLo linkcheck.sh \
            https://raw.githubusercontent.com/rust-lang/rust/master/src/tools/linkchecker/linkcheck.sh
          # Cannot use --all here because the generated redirect pages aren't available.
          sh linkcheck.sh book

.gitignore vendored Normal file

@@ -0,0 +1,7 @@
book/
*~
.idea
.DS_Store
target
tmp

CONTRIBUTING.md Normal file

@@ -0,0 +1,43 @@
# Contributing

We'd love your help! Thanks for caring about the book.

## Licensing

This repository is under the same license as OrbTK itself, MIT. You
can find the full text of the license in the `LICENSE-MIT` file in this
repository.

## Code of Conduct

The OrbTK project has [a code of conduct](https://github.com/redox-os/orbtk/policies/code-of-conduct)
that is in line with the one used in the Rust project itself. It governs all sub-projects,
including this one. Please respect it!

## Review

Our [open pull requests][pulls] are new chapters or edits that we're
currently working on. We would love it if you would read through those and make
comments for any suggestions or corrections!

[pulls]: https://github.com/orbtk/book/pulls

## Help wanted

If you're looking for ways to help that don't involve large amounts of
reading or writing, check out the [open issues with the E-help-wanted
label][help-wanted]. These might be small fixes to the text, OrbTK code,
or shell scripts that would help us be more efficient or enhance the book in
some way!

[help-wanted]: https://github.com/redox-os/orbtk/book/issues?q=is%3Aopen+is%3Aissue+label%3AE-help-wanted

## Translations

We'd love help translating the book! See the [Translations] label to join in
efforts that are currently in progress. Open a new issue to start working on
a new language! We're waiting on [mdbook support] for multiple languages
before we merge any in, but feel free to start!

[Translations]: https://github.com/redox-os/orbtk/book/issues?q=is%3Aopen+is%3Aissue+label%3ATranslations
[mdbook support]: https://github.com/rust-lang-nursery/mdBook/issues/5

Cargo.toml Normal file

@@ -0,0 +1,47 @@
[package]
name = "orbtk-book"
version = "0.0.1"
authors = ["Florian Blasius, <"]
description = "The Orbital Widget Toolkit"
edition = "2018"
[[bin]]
name = "concat_chapters"
path = "tools/src/bin/concat_chapters.rs"
[[bin]]
name = "convert_quotes"
path = "tools/src/bin/convert_quotes.rs"
[[bin]]
name = "lfp"
path = "tools/src/bin/lfp.rs"
[[bin]]
name = "link2print"
path = "tools/src/bin/link2print.rs"
[[bin]]
name = "release_listings"
path = "tools/src/bin/release_listings.rs"
[[bin]]
name = "remove_hidden_lines"
path = "tools/src/bin/remove_hidden_lines.rs"
[[bin]]
name = "remove_links"
path = "tools/src/bin/remove_links.rs"
[[bin]]
name = "remove_markup"
path = "tools/src/bin/remove_markup.rs"
[dependencies]
walkdir = "2.3.1"
docopt = "1.1.0"
serde = "1.0"
regex = "1.3.3"
lazy_static = "1.4.0"
flate2 = "1.0.13"
tar = "0.4.26"

LICENSE-MIT Normal file

@@ -0,0 +1,25 @@
Copyright (c) 2010 The Rust Project Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

README.md Normal file

@@ -0,0 +1,98 @@
# The Orbital Widget Toolkit

![Build Status](https://github.com/redox-os/orbtk/book/workflows/CI/badge.svg)

This repository contains the source of "The Orbital Widget Toolkit" book.
We will refer to it below as OrbTK.

<!--
WIP: once it is ready to be shipped

[The book is available in dead-tree form from No Starch Press][nostarch].

[nostarch]: https://nostarch.com/

You can read the book for free online. Please see the book as shipped with
the latest [stable] or [develop] OrbTK releases. Be aware that issues
in those versions may have been fixed in this repository already, as those
releases are updated less frequently.

[stable]: https://doc.orbtk.org/stable/book/
[develop]: https://doc.orbtk.org/develop/book/

See the [releases] to download the code of all the code listings that appear in the book.

[releases]: https://github.com/redox-os/orbtk/book/releases
-->

## Requirements

Building the book requires [mdBook], ideally the same version that
rust-lang/rust uses in [this file][rust-mdbook]. To get it:

[mdBook]: https://github.com/rust-lang-nursery/mdBook
[rust-mdbook]: https://github.com/rust-lang/rust/blob/master/src/tools/rustbook/Cargo.toml

```bash
$ cargo install mdbook --vers [version-num]
```

## Building

To build the book, change into this directory and type:

```bash
$ mdbook build
```

The output will be in the `book` subdirectory. To check it out, open it in
your web browser.

_Firefox:_

```bash
$ firefox book/index.html                       # Linux
$ open -a "Firefox" book/index.html             # OS X
$ Start-Process "firefox.exe" .\book\index.html # Windows (PowerShell)
$ start firefox.exe .\book\index.html           # Windows (Cmd)
```

_Chrome:_

```bash
$ google-chrome book/index.html                 # Linux
$ open -a "Google Chrome" book/index.html       # OS X
$ Start-Process "chrome.exe" .\book\index.html  # Windows (PowerShell)
$ start chrome.exe .\book\index.html            # Windows (Cmd)
```

Executing `mdbook serve` will have **mdbook** act as a web server;
the book can then be read by opening the following URL: http://localhost:3000.

To run the tests:

```bash
$ mdbook test
```

## Contributing

We'd love your help! Please see [CONTRIBUTING.md][contrib] to learn about the
kinds of contributions we're looking for.

[contrib]: https://github.com/redox-os/orbtk/book/blob/master/CONTRIBUTING.md

### Translations

We'd love help translating the book! See the [Translations] label to join in
efforts that are currently in progress. Open a new issue to start working on
a new language! We're waiting on [mdbook support] for multiple languages
before we merge any in, but feel free to start!

[Translations]: https://github.com/redox-os/orbtk/book/issues?q=is%3Aopen+is%3Aissue+label%3ATranslations
[mdbook support]: https://github.com/rust-lang-nursery/mdBook/issues/5

## Spellchecking

To scan source files for spelling errors, you can use the `spellcheck.sh`
script. It needs a dictionary of valid words, which is provided in
`ci/dictionary.txt`. If the script produces a false positive (say, you used the
word `BTreeMap`, which the script considers invalid), you need to add this word
to `ci/dictionary.txt` (keep the sorted order for consistency).
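
For example, a new word can be appended and the list re-sorted in one step. This is a sketch using standard Unix tools; it is demonstrated on a temporary copy, and the aspell header on the first line must stay first:

```bash
# Demo on a temporary file; in the repository you would point $dict
# at ci/dictionary.txt instead.
dict=$(mktemp)
printf 'personal_ws-1.1 en 0 utf-8\nalloc\nwildcard\n' > "$dict"

# Append the new word, then re-sort everything below the aspell header,
# dropping duplicates.
echo 'BTreeMap' >> "$dict"
{ head -n 1 "$dict"; tail -n +2 "$dict" | sort -u; } > "$dict.tmp"
mv "$dict.tmp" "$dict"
```

Note that the sort order depends on your locale; running the commands with the same locale the dictionary was created under keeps the diff minimal.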

book.toml Normal file

@@ -0,0 +1,21 @@
[book]
title = "The Orbital Widget Toolkit"
description = "The Orbital Widget Toolkit is a multi-platform toolkit that enables you to build scalable user interfaces. All components are developed with the programming language Rust."
authors = ["Florian Blasius, with Contributions from the Rust Community"]
language = "en"

[build]
create-missing = false

[output.html]
additional-css = ["theme/2020-edition.css"]
git-repository-url = "https://github.com/redox-os/orbtk/book"

[output.linkcheck]
# Should we check links on the internet? Enabling this option adds a
# non-negligible performance impact.
follow-web-links = false
# Are we allowed to link to files outside of the book's root directory? This
# may help prevent linking to sensitive files (e.g. "../../../../etc/shadow").
traverse-parent-directories = false

ci/dictionary.txt Normal file

@@ -0,0 +1,553 @@
personal_ws-1.1 en 0 utf-8
abcabcabc
abcd
abcdefghijklmnopqrstuvwxyz
adaptor
adaptors
AddAssign
Addr
afdc
aggregator
AGraph
aliasability
alignof
alloc
allocator
Amir
anotherusername
APIs
app's
aren
args
ArgumentV
associativity
async
atomics
attr
autocompletion
AveragedCollection
backend
backported
backtrace
backtraces
BACKTRACE
Backtraces
Baz's
benchmarking
bioinformatics
bitand
BitAnd
BitAndAssign
bitor
BitOr
BitOrAssign
bitwise
Bitwise
bitxor
BitXor
BitXorAssign
Bjarne
Boehm
bool
Boolean
Booleans
Bors
BorrowMutError
BoxMeUp
BTreeSet
BuildHasher
Cacher
Cagain
callsite
CamelCase
cargodoc
ChangeColor
ChangeColorMessage
charset
choo
chXX
chYY
clippy
clippy's
cmdlet
coercions
combinator
ConcreteType
config
Config
const
consts
constant's
copyeditor
couldn
CPUs
cratesio
CRLF
cryptocurrencies
cryptographic
cryptographically
CStr
CString
ctrl
Ctrl
customizable
CustomSmartPointer
CustomSmartPointers
data's
DataStruct
deallocate
deallocated
deallocating
deallocation
debuginfo
decl
decrementing
deduplicate
deduplicating
deps
deref
Deref
dereference
Dereference
dereferenced
dereferences
dereferencing
DerefMut
DeriveInput
destructor
destructure
destructured
destructures
destructuring
Destructuring
deterministically
DevOps
didn
Dobrý
doccargo
doccratesio
DOCTYPE
doesn
disambiguating
DisplayBacktrace
DivAssign
DraftPost
DSTs
ebook
ebooks
Edsger
egular
else's
emoji
encodings
enum
Enum
enums
enum's
Enums
eprintln
Erlang
ErrorKind
executables
expr
extern
favicon
ferris
FFFD
FFFF
figcaption
fieldname
filename
Filename
filesystem
Filesystem
filesystem's
filesystems
Firefox
FnMut
FnOnce
formatter
formatters
FrenchToast
FromIterator
frontend
getter
GGraph
GitHub
gitignore
grapheme
Grapheme
growable
gzip
hardcode
hardcoded
hardcoding
hasher
hashers
HashMap
HashSet
Haskell
hasn
HeadB
HeadC
HelloMacro
helloworld
HelloWorld
HelloWorldName
Hmmm
Hoare
Hola
homogenous
html
https
hyperoptimize
hypotheticals
Iceburgh
ident
IDE
IDEs
IDE's
IEEE
impl
implementor
implementors
ImportantExcerpt
incrementing
IndexMut
indices
init
initializer
initializers
inline
instantiation
internet
interoperate
IntoIterator
InvalidDigit
invariants
ioerror
iokind
ioresult
IoResult
iostdin
IpAddr
IpAddrKind
irst
isize
iter
iterator's
JavaScript
JoinHandle
Kay's
kinded
Klabnik
lang
LastWriteTime
latin
liballoc
libc
libcollections
libcore
libpanic
librarys
libreoffice
libstd
libunwind
lifecycle
LimitTracker
linter
LLVM
lobally
locators
LockResult
login
lookup
loopback
lossy
lval
macOS
Matsakis
mathematic
memoization
metadata
Metadata
metaprogramming
mibbit
Mibbit
millis
minigrep
mixup
mkdir
MockMessenger
modifiability
modularity
monomorphization
Monomorphization
monomorphized
MoveMessage
Mozilla
mpsc
msvc
MulAssign
multibyte
multithreaded
mutex
mutex's
Mutex
mutexes
Mutexes
MutexGuard
mutext
MyBox
myprogram
namespace
namespaced
namespaces
namespacing
natively
newfound
NewJob
NewsArticle
NewThread
newtype
newtypes
nitty
nocapture
nomicon
nonadministrators
nondeterministic
nonequality
nongeneric
NotFound
nsprust
null's
OCaml
offsetof
online
OpenGL
optimizations
OptionalFloatingPointNumber
OptionalNumber
OsStr
OsString
other's
OutlinePrint
overloadable
overread
PanicPayload
param
parameterize
ParseIntError
PartialEq
PartialOrd
pbcopy
PendingReview
PendingReviewPost
PlaceholderType
polymorphism
PoolCreationError
portia
powershell
PowerShell
powi
preallocate
preallocates
preprocessing
Preprocessing
preprocessor
PrimaryColor
println
priv
proc
proto
pthreads
pushups
QuitMessage
quux
RAII
randcrate
RangeFrom
RangeTo
RangeFull
README
READMEs
rect
recurse
recv
redeclaring
Refactoring
refactor
refactoring
refcell
RefCell
refcellt
RefMut
reformats
refutability
reimplement
RemAssign
repr
representable
request's
resizes
resizing
retweet
rewordings
rint
ripgrep
runnable
runtime
runtimes
Rustacean
Rustaceans
rUsT
rustc
rustdoc
Rustonomicon
rustfix
rustfmt
rustup
sampleproject
screenshot
searchstring
SecondaryColor
SelectBox
semver
SemVer
serde
ShlAssign
ShrAssign
shouldn
Simula
siphash
situps
sizeof
SliceIndex
Smalltalk
snuck
someproject
someusername
SPDX
spdx
SpreadsheetCell
sqrt
stackoverflow
startup
StaticRef
stderr
stdin
Stdin
stdlib
stdout
steveklabnik's
stringify
Stroustrup
Stroustrup's
struct
Struct
structs
struct's
Structs
StrWrap
SubAssign
subclasses
subcommand
subcommands
subdirectories
subdirectory
submodule
submodules
Submodules
suboptimal
subpath
substring
subteams
subtree
subtyping
summarizable
supertrait
supertraits
TcpListener
TcpStream
templating
test's
TextField
That'd
there'd
ThreadPool
timestamp
Tiếng
timeline
tlborm
tlsv
TODO
TokenStream
toml
TOML
toolchain
toolchains
ToString
tradeoff
tradeoffs
TrafficLight
transcoding
trpl
tuesday
tuple
tuples
turbofish
Turon
typeof
TypeName
UFCS
unary
Unary
uncomment
Uncomment
uncommenting
unevaluated
Uninstalling
uninstall
unix
unpopulated
unoptimized
UnsafeCell
unsafety
unsized
unsynchronized
URIs
UsefulType
username
USERPROFILE
usize
UsState
utils
vals
variable's
variant's
vers
versa
vert
Versioning
visualstudio
Vlissides
vscode
vtable
waitlist
wasn
weakt
WeatherForecast
WebSocket
whitespace
wildcard
wildcards
workflow
workspace
workspaces
Workspaces
wouldn
writeln
WriteMessage
xpression
yyyy
ZipImpl

ci/spellcheck.sh Executable file

@@ -0,0 +1,99 @@
#!/bin/bash

aspell --version

# Checks project Markdown files for spelling mistakes.

# Notes:

# This script needs a dictionary file ($dict_filename) with project-specific
# valid words. If this file is missing, the first invocation of the script
# generates a file of words considered typos at the moment. The user should
# remove real typos from this file and leave only valid words. When the script
# produces a false positive after a source modification, the new valid word
# should be added to the dictionary file.

# The default mode of this script is interactive. Each source file is scanned
# for typos. aspell opens a window, suggesting fixes for each typo it finds.
# Original files with errors will be backed up to files named "filename.md.bak".

# When running in CI, this script should be run in "list" mode (pass "list"
# as the first argument). In this mode the script scans all files and reports
# the errors it finds. The exit code then depends on the scan result:
# 1 if any errors were found,
# 0 if all is clear.

# The script skips words whose length is less than or equal to 3. This helps
# to avoid some false positives.

# We could consider skipping source code in Markdown files (```code```) to
# reduce the rate of false positives, but then we would lose the ability to
# detect typos in code comments/strings etc.

shopt -s nullglob

dict_filename=./ci/dictionary.txt
markdown_sources=(./src/*.md)
mode="check"

# aspell repeatedly modifies the personal dictionary for some reason,
# so we should use a copy of our dictionary.
dict_path="/tmp/dictionary.txt"

if [[ "$1" == "list" ]]; then
    mode="list"
fi

# Error if running in list (CI) mode and there isn't a dictionary file;
# creating one in CI won't do any good :(
if [[ "$mode" == "list" && ! -f "$dict_filename" ]]; then
    echo "No dictionary file found! A dictionary file is required in CI!"
    exit 1
fi

if [[ ! -f "$dict_filename" ]]; then
    # Pre-check mode: generate a dictionary of words aspell considers typos.
    # After the user validates that this file contains only valid words, we can
    # look for typos using this dictionary and some default aspell dictionary.
    echo "Scanning files to generate dictionary file '$dict_filename'."
    echo "Please check that it doesn't contain any misspellings."

    echo "personal_ws-1.1 en 0 utf-8" > "$dict_filename"
    cat "${markdown_sources[@]}" | aspell --ignore 3 list | sort -u >> "$dict_filename"
elif [[ "$mode" == "list" ]]; then
    # List (CI) mode: scan all files, report errors.
    declare -i retval=0

    cp "$dict_filename" "$dict_path"
    if [ ! -f "$dict_path" ]; then
        retval=1
        exit "$retval"
    fi

    for fname in "${markdown_sources[@]}"; do
        command=$(aspell --ignore 3 --personal="$dict_path" "$mode" < "$fname")
        if [[ -n "$command" ]]; then
            for error in $command; do
                # FIXME: find a more correct way to get the line number
                # (ideally from aspell). For now this can produce false
                # positives, because it is just a grep.
                grep --with-filename --line-number --color=always "$error" "$fname"
            done
            retval=1
        fi
    done
    exit "$retval"
elif [[ "$mode" == "check" ]]; then
    # Interactive mode: fix typos.
    cp "$dict_filename" "$dict_path"
    if [ ! -f "$dict_path" ]; then
        retval=1
        exit "$retval"
    fi

    for fname in "${markdown_sources[@]}"; do
        aspell --ignore 3 --dont-backup --personal="$dict_path" "$mode" "$fname"
    done
fi

ci/validate.sh Normal file

@@ -0,0 +1,4 @@
for file in src/*.md ; do
    echo "Checking references in $file"
    cargo run --quiet --bin link2print < "$file" > /dev/null
done

rustfmt.toml Normal file

@@ -0,0 +1 @@
max_width = 80

style-guide.md Normal file

@@ -0,0 +1,34 @@
# Style Guide

## Prose

* Prefer title case for chapter/section headings, ex: `## Generating a Secret
  Number` rather than `## Generating a secret number`.
* Prefer italics over single quotes when calling out a term, ex: `is an
  *associated function* of` rather than `is an associated function of`.
* When talking about a method in prose, DO NOT include the parentheses, ex:
  `read_line` rather than `read_line()`.
* Hard wrap at 80 chars.
* Prefer not mixing code and not-code in one word, ex: ``Remember when we wrote
  `use std::io`?`` rather than ``Remember when we `use`d `std::io`?``

## Code

* Add the file name before markdown blocks to make it clear which file we're
  talking about, when applicable.
* When making changes to code, make it clear which parts of the code changed
  and which stayed the same... not sure how to do this yet
* Split up long lines as appropriate to keep them under 80 chars if possible.
* Use `bash` syntax highlighting for command line output code blocks.

## Links

Once all the scripts are done:

* If a link shouldn't be printed, mark it to be ignored.
  * This includes all "Chapter XX" intra-book links, which *should* be links
    for the HTML version.
* Make intra-book links and stdlib API doc links relative so they work whether
  the book is read offline or on docs.rust-lang.org.
* Use markdown links and keep in mind that they will be changed into `text at
  *url*` in print, so word them in a way that reads well in that format.

theme/2020-edition.css Normal file

@@ -0,0 +1,9 @@
span.caption {
font-size: .8em;
font-weight: 600;
}
span.caption code {
font-size: 0.875em;
font-weight: 400;
}

tools/convert-quotes.sh Executable file

@@ -0,0 +1,13 @@
#!/bin/bash
set -eu

dir=$1

mkdir -p "tmp/$dir"

for f in "$dir"/*.md
do
    cargo run --bin convert_quotes < "$f" > "tmp/$f"
    mv "tmp/$f" "$f"
done

tools/doc-to-md.sh Executable file

@@ -0,0 +1,20 @@
#!/bin/bash
set -eu

# Get all the docx files in the tmp dir.
ls tmp/*.docx | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename -s .docx | \
while IFS= read -r filename; do
    # Make a directory to put the XML in.
    mkdir -p "tmp/$filename"
    # Unzip the docx to get at the XML.
    unzip -o "tmp/$filename.docx" -d "tmp/$filename"
    # Convert to markdown with XSL.
    xsltproc tools/docx-to-md.xsl "tmp/$filename/word/document.xml" | \
    # Hard wrap at 80 chars at word boundaries.
    fold -w 80 -s | \
    # Remove trailing whitespace and save in the `nostarch` dir for comparison.
    sed -e "s/ *$//" > "nostarch/$filename.md"
done

tools/docx-to-md.xsl Normal file

@@ -0,0 +1,220 @@
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml">
<xsl:output method="text" />
<xsl:template match="/">
<xsl:apply-templates select="/w:document/w:body/*" />
</xsl:template>
<!-- Ignore these -->
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'TOC')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents1')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents2')]" />
<xsl:template match="w:p[starts-with(w:pPr/w:pStyle/@w:val, 'Contents3')]" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ChapterStart']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Normal']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Standard']" />
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'AuthorQuery']" />
<xsl:template match="w:p[w:pPr[not(w:pStyle)]]" />
<!-- Paragraph styles -->
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ChapterTitle']">
<xsl:text>&#10;[TOC]&#10;&#10;</xsl:text>
<xsl:text># </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadA']">
<xsl:text>## </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadB']">
<xsl:text>### </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadC']">
<xsl:text>#### </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'HeadBox']">
<xsl:text>### </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'NumListA' or @w:val = 'NumListB']]">
<xsl:text>1. </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'NumListC']]">
<xsl:text>1. </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BulletA' or @w:val = 'BulletB' or @w:val = 'ListPlainA' or @w:val = 'ListPlainB']]">
<xsl:text>* </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BulletC' or @w:val = 'ListPlainC']]">
<xsl:text>* </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'SubBullet']]">
<xsl:text> * </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BodyFirst' or @w:val = 'Body' or @w:val = 'BodyFirstBox' or @w:val = 'BodyBox' or @w:val = '1stPara']]">
<xsl:if test=".//w:t">
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:if>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeA' or @w:val = 'CodeAWingding']]">
<xsl:text>```&#10;</xsl:text>
<!-- Don't apply Emphasis/etc templates in code blocks -->
<xsl:for-each select="w:r">
<xsl:value-of select="w:t" />
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeB' or @w:val = 'CodeBWingding']]">
<!-- Don't apply Emphasis/etc templates in code blocks -->
<xsl:for-each select="w:r">
<xsl:value-of select="w:t" />
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'CodeC' or @w:val = 'CodeCWingding']]">
<!-- Don't apply Emphasis/etc templates in code blocks -->
<xsl:for-each select="w:r">
<xsl:value-of select="w:t" />
</xsl:for-each>
<xsl:text>&#10;```&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'CodeSingle']">
<xsl:text>```&#10;</xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;```&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'ProductionDirective']">
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'Caption' or @w:val = 'TableTitle' or @w:val = 'Caption1' or @w:val = 'Listing']]">
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BlockQuote']]">
<xsl:text>> </xsl:text>
<xsl:apply-templates select="*" />
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle[@w:val = 'BlockText']]">
<xsl:text>&#10;</xsl:text>
<xsl:text>> </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p[w:pPr/w:pStyle/@w:val = 'Note']">
<xsl:text>> </xsl:text>
<xsl:apply-templates select="*" />
<xsl:text>&#10;&#10;</xsl:text>
</xsl:template>
<xsl:template match="w:p">
Unmatched: <xsl:value-of select="w:pPr/w:pStyle/@w:val" />
<xsl:text>
</xsl:text>
</xsl:template>
<!-- Character styles -->
<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'Literal' or @w:val = 'LiteralBold' or @w:val = 'LiteralCaption' or @w:val = 'LiteralBox']]">
<xsl:choose>
<xsl:when test="normalize-space(w:t) != ''">
<xsl:if test="starts-with(w:t, ' ')">
<xsl:text> </xsl:text>
</xsl:if>
<xsl:text>`</xsl:text>
<xsl:value-of select="normalize-space(w:t)" />
<xsl:text>`</xsl:text>
<xsl:if test="substring(w:t, string-length(w:t)) = ' '">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:when>
<xsl:when test="normalize-space(w:t) != w:t and w:t != ''">
<xsl:text> </xsl:text>
</xsl:when>
</xsl:choose>
</xsl:template>
<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'EmphasisBold']]">
<xsl:choose>
<xsl:when test="normalize-space(w:t) != ''">
<xsl:if test="starts-with(w:t, ' ')">
<xsl:text> </xsl:text>
</xsl:if>
<xsl:text>**</xsl:text>
<xsl:value-of select="normalize-space(w:t)" />
<xsl:text>**</xsl:text>
<xsl:if test="substring(w:t, string-length(w:t)) = ' '">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:when>
<xsl:when test="normalize-space(w:t) != w:t and w:t != ''">
<xsl:text> </xsl:text>
</xsl:when>
</xsl:choose>
</xsl:template>
<xsl:template match="w:r[w:rPr/w:rStyle[@w:val = 'EmphasisItalic' or @w:val = 'EmphasisItalicBox' or @w:val = 'EmphasisNote' or @w:val = 'EmphasisRevCaption' or @w:val = 'EmphasisRevItal']]">
<xsl:choose>
<xsl:when test="normalize-space(w:t) != ''">
<xsl:if test="starts-with(w:t, ' ')">
<xsl:text> </xsl:text>
</xsl:if>
<xsl:text>*</xsl:text>
<xsl:value-of select="normalize-space(w:t)" />
<xsl:text>*</xsl:text>
<xsl:if test="substring(w:t, string-length(w:t)) = ' '">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:when>
<xsl:otherwise>
<xsl:text> </xsl:text>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template match="w:r">
<xsl:value-of select="w:t" />
</xsl:template>
</xsl:stylesheet>

tools/megadiff.sh Executable file

@@ -0,0 +1,22 @@
#!/bin/bash
set -eu
# Remove files that are never affected by rustfmt or are otherwise uninteresting
rm -rf tmp/book-before/css/ tmp/book-before/theme/ tmp/book-before/img/ tmp/book-before/*.js \
tmp/book-before/FontAwesome tmp/book-before/*.css tmp/book-before/*.png \
tmp/book-before/*.json tmp/book-before/print.html
rm -rf tmp/book-after/css/ tmp/book-after/theme/ tmp/book-after/img/ tmp/book-after/*.js \
tmp/book-after/FontAwesome tmp/book-after/*.css tmp/book-after/*.png \
tmp/book-after/*.json tmp/book-after/print.html
# Get all the html files before
ls tmp/book-before/*.html | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename | \
while IFS= read -r filename; do
# Remove any files that are the same before and after
diff "tmp/book-before/$filename" "tmp/book-after/$filename" > /dev/null \
&& rm "tmp/book-before/$filename" "tmp/book-after/$filename"
done

tools/nostarch.sh Executable file

@@ -0,0 +1,27 @@
#!/bin/bash
set -eu
cargo build --release
mkdir -p tmp
rm -rf tmp/*.md
rm -rf tmp/markdown
# Render the book as Markdown to include all the code listings
MDBOOK_OUTPUT__MARKDOWN=1 mdbook build -d tmp
# Get all the Markdown files
ls tmp/markdown/${1:-""}*.md | \
# Extract just the filename so we can reuse it easily.
xargs -n 1 basename | \
# Remove all links followed by `<!-- ignore -->`, then
# Change all remaining links from Markdown to italicized inline text.
while IFS= read -r filename; do
< "tmp/markdown/$filename" ./target/release/remove_links \
| ./target/release/link2print \
| ./target/release/remove_markup \
| ./target/release/remove_hidden_lines > "tmp/$filename"
done
# Concatenate the files into the `nostarch` dir.
./target/release/concat_chapters tmp nostarch


@@ -0,0 +1,115 @@
#[macro_use]
extern crate lazy_static;
use std::collections::BTreeMap;
use std::env;
use std::fs::{create_dir, read_dir, File};
use std::io;
use std::io::{Read, Write};
use std::path::{Path, PathBuf};
use std::process::exit;
use regex::Regex;
static PATTERNS: &'static [(&'static str, &'static str)] = &[
(r"ch(\d\d)-\d\d-.*\.md", "chapter$1.md"),
(r"appendix-(\d\d).*\.md", "appendix.md"),
];
lazy_static! {
static ref MATCHERS: Vec<(Regex, &'static str)> = {
PATTERNS
.iter()
.map(|&(expr, repl)| (Regex::new(expr).unwrap(), repl))
.collect()
};
}
fn main() {
let args: Vec<String> = env::args().collect();
if args.len() < 3 {
println!("Usage: {} <src-dir> <target-dir>", args[0]);
exit(1);
}
let source_dir = ensure_dir_exists(&args[1]).unwrap();
let target_dir = ensure_dir_exists(&args[2]).unwrap();
let mut matched_files = match_files(source_dir, target_dir);
matched_files.sort();
for (target_path, source_paths) in group_by_target(matched_files) {
concat_files(source_paths, target_path).unwrap();
}
}
fn match_files(
source_dir: &Path,
target_dir: &Path,
) -> Vec<(PathBuf, PathBuf)> {
read_dir(source_dir)
.expect("Unable to read source directory")
.filter_map(|maybe_entry| maybe_entry.ok())
.filter_map(|entry| {
let source_filename = entry.file_name();
let source_filename =
&source_filename.to_string_lossy().into_owned();
for &(ref regex, replacement) in MATCHERS.iter() {
if regex.is_match(source_filename) {
let target_filename =
regex.replace_all(source_filename, replacement);
let source_path = entry.path();
let mut target_path = PathBuf::from(&target_dir);
target_path.push(target_filename.to_string());
return Some((source_path, target_path));
}
}
None
})
.collect()
}
fn group_by_target(
matched_files: Vec<(PathBuf, PathBuf)>,
) -> BTreeMap<PathBuf, Vec<PathBuf>> {
let mut grouped: BTreeMap<PathBuf, Vec<PathBuf>> = BTreeMap::new();
for (source, target) in matched_files {
if let Some(source_paths) = grouped.get_mut(&target) {
source_paths.push(source);
continue;
}
let source_paths = vec![source];
grouped.insert(target.clone(), source_paths);
}
grouped
}
fn concat_files(
source_paths: Vec<PathBuf>,
target_path: PathBuf,
) -> io::Result<()> {
println!("Concatenating into {}:", target_path.to_string_lossy());
let mut target = File::create(target_path)?;
target.write_all(b"\n[TOC]\n")?;
for path in source_paths {
println!(" {}", path.to_string_lossy());
let mut source = File::open(path)?;
let mut contents: Vec<u8> = Vec::new();
source.read_to_end(&mut contents)?;
target.write_all(b"\n")?;
target.write_all(&contents)?;
target.write_all(b"\n")?;
}
Ok(())
}
fn ensure_dir_exists(dir_string: &str) -> io::Result<&Path> {
let path = Path::new(dir_string);
if !path.exists() {
create_dir(path)?;
}
Ok(path)
}
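The grouping step above builds the target-to-sources map by hand with `get_mut` and `insert`. As a quick aside, the `BTreeMap` entry API expresses the same grouping in one line; this is a std-only sketch, not a change to the tool:

```rust
use std::collections::BTreeMap;
use std::path::PathBuf;

// Group (source, target) pairs by target; BTreeMap keeps targets sorted,
// and the sorted input keeps each target's sources in order.
fn group_by_target(
    matched_files: Vec<(PathBuf, PathBuf)>,
) -> BTreeMap<PathBuf, Vec<PathBuf>> {
    let mut grouped: BTreeMap<PathBuf, Vec<PathBuf>> = BTreeMap::new();
    for (source, target) in matched_files {
        grouped.entry(target).or_insert_with(Vec::new).push(source);
    }
    grouped
}

fn main() {
    let pairs = vec![
        (PathBuf::from("ch01-00-intro.md"), PathBuf::from("chapter01.md")),
        (PathBuf::from("ch01-01-install.md"), PathBuf::from("chapter01.md")),
        (PathBuf::from("ch02-00-guessing.md"), PathBuf::from("chapter02.md")),
    ];
    let grouped = group_by_target(pairs);
    assert_eq!(grouped.len(), 2);
    assert_eq!(grouped[&PathBuf::from("chapter01.md")].len(), 2);
    println!("{:?}", grouped);
}
```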


@@ -0,0 +1,78 @@
use std::io;
use std::io::{Read, Write};
fn main() {
let mut is_in_code_block = false;
let mut is_in_inline_code = false;
let mut is_in_html_tag = false;
let mut buffer = String::new();
if let Err(e) = io::stdin().read_to_string(&mut buffer) {
panic!("{}", e);
}
for line in buffer.lines() {
if line.is_empty() {
is_in_inline_code = false;
}
if line.starts_with("```") {
is_in_code_block = !is_in_code_block;
}
if is_in_code_block {
is_in_inline_code = false;
is_in_html_tag = false;
write!(io::stdout(), "{}\n", line).unwrap();
} else {
let modified_line = &mut String::new();
let mut previous_char = std::char::REPLACEMENT_CHARACTER;
let mut chars_in_line = line.chars();
while let Some(possible_match) = chars_in_line.next() {
// Check if inside inline code.
if possible_match == '`' {
is_in_inline_code = !is_in_inline_code;
}
// Check if inside HTML tag.
if possible_match == '<' && !is_in_inline_code {
is_in_html_tag = true;
}
if possible_match == '>' && !is_in_inline_code {
is_in_html_tag = false;
}
// Replace with right/left apostrophe/quote.
let char_to_push = if possible_match == '\''
&& !is_in_inline_code
&& !is_in_html_tag
{
if (previous_char != std::char::REPLACEMENT_CHARACTER
&& !previous_char.is_whitespace())
|| previous_char == '‘'
{
'’'
} else {
'‘'
}
} else if possible_match == '"'
&& !is_in_inline_code
&& !is_in_html_tag
{
if (previous_char != std::char::REPLACEMENT_CHARACTER
&& !previous_char.is_whitespace())
|| previous_char == '“'
{
'”'
} else {
'“'
}
} else {
// Leave untouched.
possible_match
};
modified_line.push(char_to_push);
previous_char = char_to_push;
}
write!(io::stdout(), "{}\n", modified_line).unwrap();
}
}
}
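The apostrophe rule in convert_quotes reduces to: a quote that follows a non-whitespace character (or an opening quote) closes; otherwise it opens. A minimal std-only sketch of just that rule, with an illustrative function name not taken from the tool:

```rust
// Convert straight single quotes to curly ones: close (’) after a
// non-whitespace character or an opening quote, otherwise open (‘).
fn smart_single_quotes(line: &str) -> String {
    let mut out = String::new();
    let mut previous_char = std::char::REPLACEMENT_CHARACTER;
    for c in line.chars() {
        let converted = if c == '\'' {
            if (previous_char != std::char::REPLACEMENT_CHARACTER
                && !previous_char.is_whitespace())
                || previous_char == '‘'
            {
                '’'
            } else {
                '‘'
            }
        } else {
            c
        };
        out.push(converted);
        previous_char = converted;
    }
    out
}

fn main() {
    assert_eq!(smart_single_quotes("it's 'quoted'"), "it’s ‘quoted’");
    println!("{}", smart_single_quotes("it's 'quoted'"));
}
```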

tools/src/bin/lfp.rs Normal file

@@ -0,0 +1,252 @@
// We have some long regex literals, so:
// ignore-tidy-linelength
use docopt::Docopt;
use serde::Deserialize;
use std::io::BufRead;
use std::{fs, io, path};
fn main() {
let args: Args = Docopt::new(USAGE)
.and_then(|d| d.deserialize())
.unwrap_or_else(|e| e.exit());
let src_dir = &path::Path::new(&args.arg_src_dir);
let found_errs = walkdir::WalkDir::new(src_dir)
.min_depth(1)
.into_iter()
.map(|entry| match entry {
Ok(entry) => entry,
Err(err) => {
eprintln!("{:?}", err);
std::process::exit(911)
}
})
.map(|entry| {
let path = entry.path();
if is_file_of_interest(path) {
let err_vec = lint_file(path);
for err in &err_vec {
match *err {
LintingError::LineOfInterest(line_num, ref line) => {
eprintln!(
"{}:{}\t{}",
path.display(),
line_num,
line
)
}
LintingError::UnableToOpenFile => {
eprintln!("Unable to open {}.", path.display())
}
}
}
!err_vec.is_empty()
} else {
false
}
})
.collect::<Vec<_>>()
.iter()
.any(|result| *result);
if found_errs {
std::process::exit(1)
} else {
std::process::exit(0)
}
}
const USAGE: &'static str = "
counter
Usage:
lfp <src-dir>
lfp (-h | --help)
Options:
-h --help Show this screen.
";
#[derive(Debug, Deserialize)]
struct Args {
arg_src_dir: String,
}
fn lint_file(path: &path::Path) -> Vec<LintingError> {
match fs::File::open(path) {
Ok(file) => lint_lines(io::BufReader::new(&file).lines()),
Err(_) => vec![LintingError::UnableToOpenFile],
}
}
fn lint_lines<I>(lines: I) -> Vec<LintingError>
where
I: Iterator<Item = io::Result<String>>,
{
lines
.enumerate()
.map(|(line_num, line)| {
let raw_line = line.unwrap();
if is_line_of_interest(&raw_line) {
Err(LintingError::LineOfInterest(line_num, raw_line))
} else {
Ok(())
}
})
.filter(|result| result.is_err())
.map(|result| result.unwrap_err())
.collect()
}
fn is_file_of_interest(path: &path::Path) -> bool {
path.extension().map_or(false, |ext| ext == "md")
}
fn is_line_of_interest(line: &str) -> bool {
!line
.split_whitespace()
.filter(|sub_string| {
sub_string.contains("file://")
&& !sub_string.contains("file:///projects/")
})
.collect::<Vec<_>>()
.is_empty()
}
#[derive(Debug)]
enum LintingError {
UnableToOpenFile,
LineOfInterest(usize, String),
}
#[cfg(test)]
mod tests {
use std::path;
#[test]
fn lint_file_returns_a_vec_with_errs_when_lines_of_interest_are_found() {
let string = r#"
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/guessing_game`
Guess the number!
The secret number is: 61
Please input your guess.
10
You guessed: 10
Too small!
Please input your guess.
99
You guessed: 99
Too big!
Please input your guess.
foo
Please input your guess.
61
You guessed: 61
You win!
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/debug/guessing_game`
Guess the number!
The secret number is: 7
Please input your guess.
4
You guessed: 4
$ cargo run
Running `target/debug/guessing_game`
Guess the number!
The secret number is: 83
Please input your guess.
5
$ cargo run
Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)
Running `target/debug/guessing_game`
Hello, world!
"#;
let raw_lines = string.to_string();
let lines = raw_lines.lines().map(|line| Ok(line.to_string()));
let result_vec = super::lint_lines(lines);
assert!(!result_vec.is_empty());
assert_eq!(3, result_vec.len());
}
#[test]
fn lint_file_returns_an_empty_vec_when_no_lines_of_interest_are_found() {
let string = r#"
$ cargo run
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
Running `target/guessing_game`
Guess the number!
The secret number is: 61
Please input your guess.
10
You guessed: 10
Too small!
Please input your guess.
99
You guessed: 99
Too big!
Please input your guess.
foo
Please input your guess.
61
You guessed: 61
You win!
"#;
let raw_lines = string.to_string();
let lines = raw_lines.lines().map(|line| Ok(line.to_string()));
let result_vec = super::lint_lines(lines);
assert!(result_vec.is_empty());
}
#[test]
fn is_file_of_interest_returns_false_when_the_path_is_a_directory() {
let uninteresting_fn = "src/img";
assert!(!super::is_file_of_interest(path::Path::new(
uninteresting_fn
)));
}
#[test]
fn is_file_of_interest_returns_false_when_the_filename_does_not_have_the_md_extension(
) {
let uninteresting_fn = "src/img/foo1.png";
assert!(!super::is_file_of_interest(path::Path::new(
uninteresting_fn
)));
}
#[test]
fn is_file_of_interest_returns_true_when_the_filename_has_the_md_extension()
{
let interesting_fn = "src/ch01-00-introduction.md";
assert!(super::is_file_of_interest(path::Path::new(interesting_fn)));
}
#[test]
fn is_line_of_interest_does_not_report_a_line_if_the_line_contains_a_file_url_which_is_directly_followed_by_the_project_path(
) {
let sample_line =
"Compiling guessing_game v0.1.0 (file:///projects/guessing_game)";
assert!(!super::is_line_of_interest(sample_line));
}
#[test]
fn is_line_of_interest_reports_a_line_if_the_line_contains_a_file_url_which_is_not_directly_followed_by_the_project_path(
) {
let sample_line = "Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)";
assert!(super::is_line_of_interest(sample_line));
}
}
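The core predicate in lfp.rs collects matching tokens into a `Vec` just to test emptiness. An equivalent std-only sketch uses `Iterator::any` directly (same behavior, shorter; illustrative only):

```rust
// Flag any whitespace-separated token that contains a `file://` URL
// not rooted at the canonical `file:///projects/` path.
fn is_line_of_interest(line: &str) -> bool {
    line.split_whitespace().any(|token| {
        token.contains("file://") && !token.contains("file:///projects/")
    })
}

fn main() {
    assert!(is_line_of_interest(
        "Compiling guessing_game v0.1.0 (file:///home/you/projects/guessing_game)"
    ));
    assert!(!is_line_of_interest(
        "Compiling guessing_game v0.1.0 (file:///projects/guessing_game)"
    ));
    println!("ok");
}
```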

tools/src/bin/link2print.rs Normal file

@@ -0,0 +1,415 @@
// FIXME: we have some long lines that could be refactored, but it's not a big deal.
// ignore-tidy-linelength
use regex::{Captures, Regex};
use std::collections::HashMap;
use std::io;
use std::io::{Read, Write};
fn main() {
write_md(parse_links(parse_references(read_md())));
}
fn read_md() -> String {
let mut buffer = String::new();
match io::stdin().read_to_string(&mut buffer) {
Ok(_) => buffer,
Err(error) => panic!("{}", error),
}
}
fn write_md(output: String) {
write!(io::stdout(), "{}", output).unwrap();
}
fn parse_references(buffer: String) -> (String, HashMap<String, String>) {
let mut ref_map = HashMap::new();
// FIXME: currently doesn't handle "title" in following line.
let re = Regex::new(r###"(?m)\n?^ {0,3}\[([^]]+)\]:[[:blank:]]*(.*)$"###)
.unwrap();
let output = re.replace_all(&buffer, |caps: &Captures<'_>| {
let key = caps.get(1).unwrap().as_str().to_uppercase();
let val = caps.get(2).unwrap().as_str().to_string();
if ref_map.insert(key, val).is_some() {
panic!("Did not expect markdown page to have duplicate reference");
}
"".to_string()
}).to_string();
(output, ref_map)
}
fn parse_links((buffer, ref_map): (String, HashMap<String, String>)) -> String {
// FIXME: check which punctuation is allowed by spec.
let re = Regex::new(r###"(?:(?P<pre>(?:```(?:[^`]|`[^`])*`?\n```\n)|(?:[^\[]`[^`\n]+[\n]?[^`\n]*`))|(?:\[(?P<name>[^]]+)\](?:(?:\([[:blank:]]*(?P<val>[^")]*[^ ])(?:[[:blank:]]*"[^"]*")?\))|(?:\[(?P<key>[^]]*)\]))?))"###).expect("could not create regex");
let error_code =
Regex::new(r###"^E\d{4}$"###).expect("could not create regex");
let output = re.replace_all(&buffer, |caps: &Captures<'_>| {
match caps.name("pre") {
Some(pre_section) => format!("{}", pre_section.as_str()),
None => {
let name = caps.name("name").expect("could not get name").as_str();
// Really we should ignore text inside code blocks,
// this is a hack to not try to treat `#[derive()]`,
// `[profile]`, `[test]`, or `[E\d\d\d\d]` like a link.
if name.starts_with("derive(") ||
name.starts_with("profile") ||
name.starts_with("test") ||
name.starts_with("no_mangle") ||
error_code.is_match(name) {
return name.to_string()
}
let val = match caps.name("val") {
// `[name](link)`
Some(value) => value.as_str().to_string(),
None => {
match caps.name("key") {
Some(key) => {
match key.as_str() {
// `[name][]`
"" => format!("{}", ref_map.get(&name.to_uppercase()).expect(&format!("could not find url for the link text `{}`", name))),
// `[name][reference]`
_ => format!("{}", ref_map.get(&key.as_str().to_uppercase()).expect(&format!("could not find url for the link text `{}`", key.as_str()))),
}
}
// `[name]` as reference
None => format!("{}", ref_map.get(&name.to_uppercase()).expect(&format!("could not find url for the link text `{}`", name))),
}
}
};
format!("{} at *{}*", name, val)
}
}
});
output.to_string()
}
#[cfg(test)]
mod tests {
fn parse(source: String) -> String {
super::parse_links(super::parse_references(source))
}
#[test]
fn parses_inline_link() {
let source =
r"This is a [link](http://google.com) that should be expanded"
.to_string();
let target =
r"This is a link at *http://google.com* that should be expanded"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_multiline_links() {
let source = r"This is a [link](http://google.com) that
should appear expanded. Another [location](/here/) and [another](http://gogogo)"
.to_string();
let target = r"This is a link at *http://google.com* that
should appear expanded. Another location at */here/* and another at *http://gogogo*"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_reference() {
let source = r"This is a [link][theref].
[theref]: http://example.com/foo
more text"
.to_string();
let target = r"This is a link at *http://example.com/foo*.
more text"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_implicit_link() {
let source = r"This is an [implicit][] link.
[implicit]: /The Link/"
.to_string();
let target = r"This is an implicit at */The Link/* link.".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_refs_with_one_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_refs_with_two_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_refs_with_three_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
#[should_panic]
fn rejects_refs_with_four_space_indentation() {
let source = r"This is a [link][ref]
[ref]: The link"
.to_string();
let target = r"This is a link at *The link*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_optional_inline_title() {
let source =
r###"This is a titled [link](http://example.com "My title")."###
.to_string();
let target =
r"This is a titled link at *http://example.com*.".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_title_with_punctuation() {
let source =
r###"[link](http://example.com "It's Title")"###.to_string();
let target = r"link at *http://example.com*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_name_with_punctuation() {
let source = r###"[I'm here](there)"###.to_string();
let target = r###"I'm here at *there*"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_name_with_utf8() {
let source = r###"[user’s forum](the user’s forum)"###.to_string();
let target =
r###"user’s forum at *the user’s forum*"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_reference_with_punctuation() {
let source = r###"[link][the ref-ref]
[the ref-ref]:http://example.com/ref-ref"###
.to_string();
let target = r###"link at *http://example.com/ref-ref*"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_reference_case_insensitively() {
let source = r"[link][Ref]
[ref]: The reference"
.to_string();
let target = r"link at *The reference*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_link_as_reference_when_reference_is_empty() {
let source = r"[link as reference][]
[link as reference]: the actual reference"
.to_string();
let target = r"link as reference at *the actual reference*".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_link_without_reference_as_reference() {
let source = r"[link] is alone
[link]: The contents"
.to_string();
let target = r"link at *The contents* is alone".to_string();
assert_eq!(parse(source), target);
}
#[test]
#[ignore]
fn parses_link_without_reference_as_reference_with_asterisks() {
let source = r"*[link]* is alone
[link]: The contents"
.to_string();
let target = r"*link* at *The contents* is alone".to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_pre_sections() {
let source = r###"```toml
[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
[dependencies]
```
"###
.to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_quoted_sections() {
let source = r###"do not change `[package]`."###.to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_quoted_sections_containing_newlines() {
let source = r"do not change `this [package]
is still here` [link](ref)"
.to_string();
let target = r"do not change `this [package]
is still here` link at *ref*"
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_links_in_pre_sections_while_still_handling_links() {
let source = r###"```toml
[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
[dependencies]
```
Another [link]
more text
[link]: http://gohere
"###
.to_string();
let target = r###"```toml
[package]
name = "hello_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
[dependencies]
```
Another link at *http://gohere*
more text
"###
.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_quotes_in_pre_sections() {
let source = r###"```bash
$ cargo build
Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
src/main.rs:23:21: 23:35 error: mismatched types [E0308]
src/main.rs:23 match guess.cmp(&secret_number) {
^~~~~~~~~~~~~~
src/main.rs:23:21: 23:35 help: run `rustc --explain E0308` to see a detailed explanation
src/main.rs:23:21: 23:35 note: expected type `&std::string::String`
src/main.rs:23:21: 23:35 note: found type `&_`
error: aborting due to previous error
Could not compile `guessing_game`.
```
"###
.to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_short_quotes() {
let source = r"to `1` at index `[0]` i".to_string();
let target = source.clone();
assert_eq!(parse(source), target);
}
#[test]
fn ignores_pre_sections_with_final_quote() {
let source = r###"```bash
$ cargo run
Compiling points v0.1.0 (file:///projects/points)
error: the trait bound `Point: std::fmt::Display` is not satisfied [--explain E0277]
--> src/main.rs:8:29
8 |> println!("Point 1: {}", p1);
|> ^^
<std macros>:2:27: 2:58: note: in this expansion of format_args!
<std macros>:3:1: 3:54: note: in this expansion of print! (defined in <std macros>)
src/main.rs:8:5: 8:33: note: in this expansion of println! (defined in <std macros>)
note: `Point` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
note: required by `std::fmt::Display::fmt`
```
`here` is another [link](the ref)
"###.to_string();
let target = r###"```bash
$ cargo run
Compiling points v0.1.0 (file:///projects/points)
error: the trait bound `Point: std::fmt::Display` is not satisfied [--explain E0277]
--> src/main.rs:8:29
8 |> println!("Point 1: {}", p1);
|> ^^
<std macros>:2:27: 2:58: note: in this expansion of format_args!
<std macros>:3:1: 3:54: note: in this expansion of print! (defined in <std macros>)
src/main.rs:8:5: 8:33: note: in this expansion of println! (defined in <std macros>)
note: `Point` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
note: required by `std::fmt::Display::fmt`
```
`here` is another link at *the ref*
"###.to_string();
assert_eq!(parse(source), target);
}
#[test]
fn parses_adam_p_cheatsheet() {
let source = r###"[I'm an inline-style link](https://www.google.com)
[I'm an inline-style link with title](https://www.google.com "Google's Homepage")
[I'm a reference-style link][Arbitrary case-insensitive reference text]
[I'm a relative reference to a repository file](../blob/master/LICENSE)
[You can use numbers for reference-style link definitions][1]
Or leave it empty and use the [link text itself][].
URLs and URLs in angle brackets will automatically get turned into links.
http://www.example.com or <http://www.example.com> and sometimes
example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
[arbitrary case-insensitive reference text]: https://www.mozilla.org
[1]: http://slashdot.org
[link text itself]: http://www.reddit.com"###
.to_string();
let target = r###"I'm an inline-style link at *https://www.google.com*
I'm an inline-style link with title at *https://www.google.com*
I'm a reference-style link at *https://www.mozilla.org*
I'm a relative reference to a repository file at *../blob/master/LICENSE*
You can use numbers for reference-style link definitions at *http://slashdot.org*
Or leave it empty and use the link text itself at *http://www.reddit.com*.
URLs and URLs in angle brackets will automatically get turned into links.
http://www.example.com or <http://www.example.com> and sometimes
example.com (but not on Github, for example).
Some text to show that the reference links can follow later.
"###
.to_string();
assert_eq!(parse(source), target);
}
}


@@ -0,0 +1,159 @@
#[macro_use]
extern crate lazy_static;
use regex::Regex;
use std::error::Error;
use std::fs;
use std::fs::File;
use std::io::prelude::*;
use std::io::{BufReader, BufWriter};
use std::path::{Path, PathBuf};
fn main() -> Result<(), Box<dyn Error>> {
// Get all listings from the `listings` directory
let listings_dir = Path::new("listings");
// Put the results in the `tmp/listings` directory
let out_dir = Path::new("tmp/listings");
// Clear out any existing content in `tmp/listings`
if out_dir.is_dir() {
fs::remove_dir_all(out_dir)?;
}
// Create a new, empty `tmp/listings` directory
fs::create_dir(out_dir)?;
// For each chapter in the `listings` directory,
for chapter in fs::read_dir(listings_dir)? {
let chapter = chapter?;
let chapter_path = chapter.path();
let chapter_name = chapter_path
.file_name()
.expect("Chapter should've had a name");
// Create a corresponding chapter dir in `tmp/listings`
let output_chapter_path = out_dir.join(chapter_name);
fs::create_dir(&output_chapter_path)?;
// For each listing in the chapter directory,
for listing in fs::read_dir(chapter_path)? {
let listing = listing?;
let listing_path = listing.path();
let listing_name = listing_path
.file_name()
.expect("Listing should've had a name");
// Create a corresponding listing dir in the tmp chapter dir
let output_listing_dir = output_chapter_path.join(listing_name);
fs::create_dir(&output_listing_dir)?;
// Copy all the cleaned files in the listing to the tmp directory
copy_cleaned_listing_files(listing_path, output_listing_dir)?;
}
}
// Create a compressed archive of all the listings
let tarfile = File::create("tmp/listings.tar.gz")?;
let encoder =
flate2::write::GzEncoder::new(tarfile, flate2::Compression::default());
let mut archive = tar::Builder::new(encoder);
archive.append_dir_all("listings", "tmp/listings")?;
// Assure whoever is running this that the script exited successfully, and remind them
// where the generated file ends up
println!("Release tarball of listings in tmp/listings.tar.gz");
Ok(())
}
// Cleaned listings will not contain:
//
// - `target` directories
// - `output.txt` files used to display output in the book
// - `rustfmt-ignore` files used to signal to update-rustc.sh that the listing shouldn't be formatted
// - anchor comments or snip comments
// - empty `main` functions in `lib.rs` files used to trick rustdoc
fn copy_cleaned_listing_files(
from: PathBuf,
to: PathBuf,
) -> Result<(), Box<dyn Error>> {
for item in fs::read_dir(from)? {
let item = item?;
let item_path = item.path();
let item_name =
item_path.file_name().expect("Item should've had a name");
let output_item = to.join(item_name);
if item_path.is_dir() {
// Don't copy `target` directories
if item_name != "target" {
fs::create_dir(&output_item)?;
copy_cleaned_listing_files(item_path, output_item)?;
}
} else {
// Don't copy output files or files that tell update-rustc.sh not to format
if item_name != "output.txt" && item_name != "rustfmt-ignore" {
let item_extension = item_path.extension();
if item_extension.is_some() && item_extension.unwrap() == "rs" {
copy_cleaned_rust_file(
item_name,
&item_path,
&output_item,
)?;
} else {
// Copy any non-Rust files without modification
fs::copy(item_path, output_item)?;
}
}
}
}
Ok(())
}
lazy_static! {
static ref ANCHOR_OR_SNIP_COMMENTS: Regex = Regex::new(
r"(?x)
//\s*ANCHOR:\s*[\w_-]+ # Remove all anchor comments
|
//\s*ANCHOR_END:\s*[\w_-]+ # Remove all anchor ending comments
|
//\s*--snip-- # Remove all snip comments
"
)
.unwrap();
}
lazy_static! {
static ref EMPTY_MAIN: Regex = Regex::new(r"fn main\(\) \{}").unwrap();
}
// Cleaned Rust files will not contain:
//
// - anchor comments or snip comments
// - empty `main` functions in `lib.rs` files used to trick rustdoc
fn copy_cleaned_rust_file(
item_name: &std::ffi::OsStr,
from: &PathBuf,
to: &PathBuf,
) -> Result<(), Box<dyn Error>> {
let from_buf = BufReader::new(File::open(from)?);
let mut to_buf = BufWriter::new(File::create(to)?);
for line in from_buf.lines() {
let line = line?;
if !ANCHOR_OR_SNIP_COMMENTS.is_match(&line) {
if item_name != "lib.rs" || !EMPTY_MAIN.is_match(&line) {
writeln!(&mut to_buf, "{}", line)?;
}
}
}
to_buf.flush()?;
Ok(())
}


@@ -0,0 +1,83 @@
use std::io;
use std::io::prelude::*;
fn main() {
write_md(remove_hidden_lines(&read_md()));
}
fn read_md() -> String {
let mut buffer = String::new();
match io::stdin().read_to_string(&mut buffer) {
Ok(_) => buffer,
Err(error) => panic!("{}", error),
}
}
fn write_md(output: String) {
write!(io::stdout(), "{}", output).unwrap();
}
fn remove_hidden_lines(input: &str) -> String {
let mut resulting_lines = vec![];
let mut within_codeblock = false;
for line in input.lines() {
if line.starts_with("```") {
within_codeblock = !within_codeblock;
}
if !within_codeblock || (!line.starts_with("# ") && line != "#") {
resulting_lines.push(line)
}
}
resulting_lines.join("\n")
}
#[cfg(test)]
mod tests {
use crate::remove_hidden_lines;
#[test]
fn hidden_line_in_code_block_is_removed() {
let input = r#"
In this listing:
```
fn main() {
# secret
}
```
you can see that...
"#;
let output = remove_hidden_lines(input);
let desired_output = r#"
In this listing:
```
fn main() {
}
```
you can see that...
"#;
assert_eq!(output, desired_output);
}
#[test]
fn headings_arent_removed() {
let input = r#"
# Heading 1
"#;
let output = remove_hidden_lines(input);
let desired_output = r#"
# Heading 1
"#;
assert_eq!(output, desired_output);
}
}
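The filter above can also be written as a single iterator chain, with the fence-tracking flag captured mutably by the closure. A std-only sketch of the same behavior, offered as an aside rather than a change:

```rust
// Drop rustdoc-hidden lines (`# ` prefix or a bare `#`) inside code
// fences; lines outside fences pass through untouched.
fn remove_hidden_lines(input: &str) -> String {
    let mut within_codeblock = false;
    input
        .lines()
        .filter(|line| {
            if line.starts_with("```") {
                within_codeblock = !within_codeblock;
            }
            !within_codeblock || (!line.starts_with("# ") && *line != "#")
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let input = "```\nfn main() {\n# secret\n}\n```";
    assert_eq!(remove_hidden_lines(input), "```\nfn main() {\n}\n```");
    println!("ok");
}
```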


@@ -0,0 +1,45 @@
extern crate regex;
use regex::{Captures, Regex};
use std::collections::HashSet;
use std::io;
use std::io::{Read, Write};
fn main() {
let mut buffer = String::new();
if let Err(e) = io::stdin().read_to_string(&mut buffer) {
panic!("{}", e);
}
let mut refs = HashSet::new();
// Capture all links and link references.
let regex =
r"\[([^\]]+)\](?:(?:\[([^\]]+)\])|(?:\([^\)]+\)))(?i)<!--\signore\s-->";
let link_regex = Regex::new(regex).unwrap();
let first_pass = link_regex.replace_all(&buffer, |caps: &Captures<'_>| {
// Save the link reference we want to delete.
if let Some(reference) = caps.get(2) {
refs.insert(reference.as_str().to_string());
}
// Put the link title back.
caps.get(1).unwrap().as_str().to_string()
});
// Search for the references we need to delete.
let ref_regex = Regex::new(r"(?m)^\[([^\]]+)\]:\s.*\n").unwrap();
let out = ref_regex.replace_all(&first_pass, |caps: &Captures<'_>| {
let capture = caps.get(1).unwrap().to_owned();
// Check if we've marked this reference for deletion ...
if refs.contains(capture.as_str()) {
return "".to_string();
}
// ... else we put back everything we captured.
caps.get(0).unwrap().as_str().to_string()
});
write!(io::stdout(), "{}", out).unwrap();
}


@@ -0,0 +1,51 @@
extern crate regex;
use regex::{Captures, Regex};
use std::io;
use std::io::{Read, Write};
fn main() {
write_md(remove_markup(read_md()));
}
fn read_md() -> String {
let mut buffer = String::new();
match io::stdin().read_to_string(&mut buffer) {
Ok(_) => buffer,
Err(error) => panic!("{}", error),
}
}
fn write_md(output: String) {
write!(io::stdout(), "{}", output).unwrap();
}
fn remove_markup(input: String) -> String {
let filename_regex =
Regex::new(r#"\A<span class="filename">(.*)</span>\z"#).unwrap();
// Captions sometimes take up multiple lines.
let caption_start_regex =
Regex::new(r#"\A<span class="caption">(.*)\z"#).unwrap();
let caption_end_regex = Regex::new(r#"(.*)</span>\z"#).unwrap();
let regexen = vec![filename_regex, caption_start_regex, caption_end_regex];
let lines: Vec<_> = input
.lines()
.flat_map(|line| {
// Remove our syntax highlighting and rustdoc markers.
if line.starts_with("```") {
Some(String::from("```"))
// Remove the span around filenames and captions.
} else {
let result =
regexen.iter().fold(line.to_string(), |result, regex| {
regex.replace_all(&result, |caps: &Captures<'_>| {
caps.get(1).unwrap().as_str().to_string()
}).to_string()
});
Some(result)
}
})
.collect();
lines.join("\n")
}
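The first branch of remove_markup, normalizing fence openers, is worth seeing in isolation: any line that opens or closes a code fence is reduced to a bare triple backtick so language and rustdoc annotations (`rust,ignore`, `toml`, and so on) drop out. A std-only sketch of just that step, with an illustrative name:

```rust
// Reduce every fence line (```rust,ignore, ```toml, ...) to a bare ```.
fn strip_fence_info(input: &str) -> String {
    input
        .lines()
        .map(|line| if line.starts_with("```") { "```" } else { line })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let stripped = strip_fence_info("```rust,ignore\nfn f() {}\n```");
    assert_eq!(stripped, "```\nfn f() {}\n```");
    println!("ok");
}
```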

tools/update-rustc.sh Executable file

@@ -0,0 +1,76 @@
#!/bin/bash
set -eu
# Build the book before making any changes for comparison of the output.
echo 'Building book into `tmp/book-before` before updating...'
mdbook build -d tmp/book-before
# Rustfmt all listings
echo 'Formatting all listings...'
find -s listings -name Cargo.toml -print0 | while IFS= read -r -d '' f; do
dir_to_fmt=$(dirname $f)
# There are a handful of listings we don't want to rustfmt and skipping doesn't work;
# those will have a file in their directory that explains why.
if [ ! -f "${dir_to_fmt}/rustfmt-ignore" ]; then
cd $dir_to_fmt
cargo fmt --all && true
cd - > /dev/null
fi
done
# Get listings without anchor comments in tmp by compiling a release listings artifact
echo 'Generate listings without anchor comments...'
cargo run --bin release_listings
root_dir=$(pwd)
echo 'Regenerating output...'
# For any listings where we show the output,
find -s listings -name output.txt -print0 | while IFS= read -r -d '' f; do
build_directory=$(dirname $f)
full_build_directory="${root_dir}/${build_directory}"
full_output_path="${full_build_directory}/output.txt"
tmp_build_directory="tmp/${build_directory}"
cd $tmp_build_directory
# Save the previous compile time; we're going to keep it to minimize diff churn
compile_time=$(sed -E -ne "s/.*Finished (dev|test) \[unoptimized \+ debuginfo] target\(s\) in ([0-9.]*).*/\2/p" ${full_output_path})
# Act like this is the first time this listing has been built
cargo clean
# Run the command in the existing output file
cargo_command=$(sed -ne "s/$ \(.*\)/\1/p" ${full_output_path})
# Clear the output file of everything except the command
echo "$ ${cargo_command}" > ${full_output_path}
# Regenerate the output and append to the output file. Turn some warnings
# off to reduce output noise, and use one test thread to get consistent
# ordering of tests in the output when the command is `cargo test`.
RUSTFLAGS="-A unused_variables -A dead_code" RUST_TEST_THREADS=1 $cargo_command >> ${full_output_path} 2>&1 || true
# Set the project file path to the projects directory plus the crate name instead of a path
# to the computer of whoever is running this
sed -i '' -E -e "s/(Compiling|Checking) ([^\)]*) v0.1.0 (.*)/\1 \2 v0.1.0 (file:\/\/\/projects\/\2)/" ${full_output_path}
# Restore the previous compile time, if there is one
if [ -n "${compile_time}" ]; then
sed -i '' -E -e "s/Finished (dev|test) \[unoptimized \+ debuginfo] target\(s\) in [0-9.]*/Finished \1 [unoptimized + debuginfo] target(s) in ${compile_time}/" ${full_output_path}
fi
cd - > /dev/null
done
# Build the book after making all the changes
echo 'Building book into `tmp/book-after` after updating...'
mdbook build -d tmp/book-after
# Run the megadiff script that removes all files that are the same, leaving only files to audit
echo 'Removing tmp files that had no changes from the update...'
./tools/megadiff.sh
echo 'Done.'

33
workflows/CI/badge.svg Normal file
View File

@@ -0,0 +1,33 @@
<svg xmlns="http://www.w3.org/2000/svg" width="90" height="20">
<defs>
<linearGradient id="workflow-fill" x1="50%" y1="0%" x2="50%" y2="100%">
<stop stop-color="#444D56" offset="0%"/>
<stop stop-color="#24292E" offset="100%"/>
</linearGradient>
<linearGradient id="state-fill" x1="50%" y1="0%" x2="50%" y2="100%">
<stop stop-color="#34D058" offset="0%"/>
<stop stop-color="#28A745" offset="100%"/>
</linearGradient>
</defs>
<g fill="none" fill-rule="evenodd">
<g font-family="'DejaVu Sans',Verdana,Geneva,sans-serif" font-size="11">
<path id="workflow-bg" d="M0,3 C0,1.3431 1.3552,0 3.02702703,0 L40,0 L40,20 L3.02702703,20 C1.3552,20 0,18.6569 0,17 L0,3 Z" fill="url(#workflow-fill)" fill-rule="nonzero"/>
<text fill="#010101" fill-opacity=".3">
<tspan x="22.1981982" y="15">CI</tspan>
</text>
<text fill="#FFFFFF">
<tspan x="22.1981982" y="14">CI</tspan>
</text>
</g>
<g transform="translate(40)" font-family="'DejaVu Sans',Verdana,Geneva,sans-serif" font-size="11">
<path d="M0 0h46.939C48.629 0 50 1.343 50 3v14c0 1.657-1.37 3-3.061 3H0V0z" id="state-bg" fill="url(#state-fill)" fill-rule="nonzero"/>
<text fill="#010101" fill-opacity=".3">
<tspan x="4" y="15">passing</tspan>
</text>
<text fill="#FFFFFF">
<tspan x="4" y="14">passing</tspan>
</text>
</g>
<path fill="#959DA5" d="M11 3c-3.868 0-7 3.132-7 7a6.996 6.996 0 0 0 4.786 6.641c.35.062.482-.148.482-.332 0-.166-.01-.718-.01-1.304-1.758.324-2.213-.429-2.353-.822-.079-.202-.42-.823-.717-.99-.245-.13-.595-.454-.01-.463.552-.009.946.508 1.077.718.63 1.058 1.636.76 2.039.577.061-.455.245-.761.446-.936-1.557-.175-3.185-.779-3.185-3.456 0-.762.271-1.392.718-1.882-.07-.175-.315-.892.07-1.855 0 0 .586-.183 1.925.718a6.5 6.5 0 0 1 1.75-.236 6.5 6.5 0 0 1 1.75.236c1.338-.91 1.925-.718 1.925-.718.385.963.14 1.68.07 1.855.446.49.717 1.112.717 1.882 0 2.686-1.636 3.28-3.194 3.456.254.219.473.639.473 1.295 0 .936-.009 1.689-.009 1.925 0 .184.131.402.481.332A7.011 7.011 0 0 0 18 10c0-3.867-3.133-7-7-7z"/>
</g>
</svg>
