The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are ‘Friends of the Tree’, archived here for eternal glory.

This week we would like to nominate @mitaa as Friend of the Tree. Recently @mitaa has sent a wave of fixes to rustdoc (yes those are all separate links) with even more on the way! Rustdoc has historically been a tool in need of some love, and the extra help in fixing bugs is especially appreciated. Thanks @mitaa!

This week’s friend of the tree is Jeffrey Seyfried (@jseyfried)!

Jeffrey Seyfried (@jseyfried) has made some awesome contributions to name resolution. He has fixed a ton of bugs, reported previously unknown edge cases, and done some big refactorings, all of which have helped improve a complex and somewhat neglected part of the compiler.
This week we’d like to nominate @petrochenkov for Friend of the Tree. Vadim has been doing some absolutely amazing compiler work recently, such as fixing privacy bugs, fixing hygiene bugs, fixing pattern bugs, paving the way for and implementing #[deprecated], fixing and closing many privacy holes, refactoring and improving the HIR, and reviving the old type ascription PR. The list of outstanding bugs and projects in the compiler is growing ever smaller now; thanks @petrochenkov!
In his own words, WindowsBunny is “a hopping encyclopedia of all the issues +windows users might run into and how to solve them.” One of the heroes that make +Rust work on Windows, he actively pushes the frontiers of what Rust can do on +the platform. He is also notably the maintainer of the +winapi family of crates, a comprehensive set +of bindings to the Windows system APIs. You da bunny, WindowsBunny. Also, a +friend of the tree.
Today @nrc would like to nominate @marcusklaas as Friend of the Tree:

Marcus is one of the primary authors of rustfmt. He has been involved since the early days and is now the top contributor. He has fixed innumerable bugs, implemented new features, reviewed a tonne of PRs, and contributed to the design of the project. Rustfmt would not be the software it is today without his hard work; he is indeed a Friend Of The Tree.
nmatsakis would also like to declare Ryan Prichard a Friend of the Tree. Over the last few months, Ryan has been comparing the Rust compiler’s parsing behavior with that of the rust-grammar project, which aims to create a LALR(1) grammar for parsing Rust. Ryan has found a number of inconsistencies and bugs between the two. This kind of work is useful for two reasons: first, it finds bugs, which are often hard to uncover any other way; second, it helps pave the way for a true Rust reference grammar outside of the compiler source itself. So Ryan Prichard, thanks!
Vikrant Chaudhary (nasa42) is an individual who believes in the Rust community. Since June he has been contributing to This Week in Rust, coordinating its publication on urlo, and stirring up contributions. He recently rolled out an overhaul to the site’s design that brings it more in line with the main website. Today Vikrant is the main editor of the weekly newsletter, assisted by llogiq and other contributors. Thanks for keeping TWiR running, Vikrant, you friend of the tree.
@Gankra has nominated @tshepang for Friend of the Tree this week:

Over the last year Tshepang has landed over 100 improvements to our documentation. Tshepang saw where documentation was not, and said “No. This will not do.”

We should all endeavor to care about docs as much as Tshepang.
I’d like to nominate Chris Morgan (@chris-morgan) for Friend of the Tree today. Chris recently redesigned the play.rust-lang.org site for the 1.0 release, giving the site a more modern and rustic feel. Chris has been contributing to Rust for quite some time now; his first contribution dates back to July 2013, and he was one of the early pioneers in the space of HTTP libraries written in Rust. Chris truly is a friend of the tree!
BurntSushi is an individual who practically needs no introduction. He’s written many of the world’s most popular crates, including docopt.rs, regex, quickcheck, cbor, and byteorder. Don’t forget his CSV swiss-army-knife, xsv, built on rust-csv. Feedback from his early work on libraries helped inform the evolution of Rust during a critical time in its development, and BurntSushi continues to churn out the kind of Rust gems that can only come from someone who is a skilled friendofthetree.

Manish started working on Servo as part of the GSoC program in 2014, where he implemented XMLHttpRequest. Since then he’s become an integral part of the Servo team while finishing his university studies and organizing Rust community events. In 2015 he took an interest in bors’ queue and started making rollup PRs to accelerate the integration process. Nursing the PR queue is the kind of time-consuming labor that creates friends of the tree like Manish, the rollup friend of the tree.
Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability on crates.io where you could be logged in as whoever the previously logged-in user was, regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree.
Jonathan Reem has been making an impact on Rust since May 2014. His primary contribution has been as the main author of the prominent Iron web framework, though he has also created several other popular projects including the testing framework stainless. His practical experience with these projects has led to several improvements in upstream Rust, most notably his complete rewrite of the TaskPool type. Reem is doing everything he can to advance the Rust cause.
Today I would like to nominate Barosl Lee (@barosl) for Friend of the Tree. +Barosl has recently rewritten our bors cron job in a new project called homu. +Homu has a number of benefits including:
Homu was recently deployed for rust-lang/rust thanks to a number of issues being closed out by Barosl, and it’s been working fantastically so far! Barosl has also been super responsive to any new issues cropping up. Barosl truly is a Friend of the Tree!

Seonghoon has been an active member of the Rust community since early 2013, and although he has made a number of valuable contributions to Rust itself, his greatest work has been in developing key libraries out of tree. rust-encoding, one of the most popular crates in Cargo, performs character encoding, and rust-chrono handles dates and times; both fill critical holes in the functionality of the standard library. rust-strconv is a prototype of efficient numerical string conversions that is a candidate for future inclusion in the standard library. He maintains a blog where he discusses his work.
I nominate Jorge Aparicio (japaric) for Friend of the Tree (for the second time, no less!). japaric has done tremendous work porting the codebase to use the new language features that are now available. First, he converted APIs in the standard library to take full advantage of DST after it landed. Next, he converted APIs to use unboxed closures. Then, he converted a large portion of the libraries to use associated types. Finally, he removed boxed closures from the compiler entirely. He has also worked to roll out RFCs changing the overloaded operators and comparison traits, including both their definitions and their impact on the standard library. And this list excludes a number of smaller changes, like deprecating older syntax. The alpha release would not be where it is without him; Japaric is simply one of the best friends the tree has ever had.

This is a belated recognition of Kevin Ballard (aka @kballard, aka Eridius) as a friend of the tree. Kevin put a lot of work into Unicode issues in Rust, especially as related to platform-specific constraints. He wrote the current path module in part to accommodate these constraints, and participated in the recent redesign of the module. He has also been a dedicated and watchful reviewer. Thanks, Kevin, for your contributions!
Gabor’s major contributions to Rust have been in the area of language design. In the last year he has produced a number of very high quality RFCs, and though many of them have not yet been accepted, his ideas are often thought-provoking and have had a strong influence on the direction of the language. His trait-based exception handling RFC was particularly innovative, as was the one for future-proofing checked arithmetic. Gabor is an exceedingly clever Friend of the Tree.
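For context, “checked arithmetic” means surfacing integer overflow explicitly rather than wrapping silently. A small illustration using the method families as they eventually stabilized in Rust’s standard library (the stable names shown here postdate the RFC being discussed):

```rust
fn main() {
    // checked_* operations report overflow as None instead of
    // silently wrapping around
    assert_eq!(100u8.checked_add(1), Some(101));
    assert_eq!(255u8.checked_add(1), None);

    // the sibling families make the alternative overflow
    // policies explicit at the call site
    assert_eq!(255u8.wrapping_add(1), 0);
    assert_eq!(255u8.saturating_add(1), 255);
}
```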
In the last few weeks, he has fixed many, many tricky ICEs all over the compiler, but particularly in the area of unboxed closures and the borrow checker. He has also completely rewritten how unboxed closures interact with monomorphization and had a huge impact on making them usable. Brian Koropoff is truly a Friend of the Tree.

Alexis Beingessner (aka @Gankra) began contributing to Rust in July, and has already had a major impact on several library-related areas. Her main focus has been collections. She completely rewrote BTree, providing a vastly more complete and efficient implementation. She proposed and implemented the new Entry API. She’s written extensive new documentation for the collections crate. She pitched in on collections reform.

And she added collapse-all to rustdoc!

Alexis is, without a doubt, a FOTT.
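The Entry API mentioned above lets a map caller find-or-insert with a single lookup. A minimal example, using the API as it exists in today’s stable std::collections (details differed when it first landed):

```rust
use std::collections::HashMap;

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    // entry() returns a view of the slot for the key, occupied or
    // vacant, so find-and-insert needs only one hash lookup
    for word in ["tree", "friend", "tree"] {
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["tree"], 2);
    assert_eq!(counts["friend"], 1);
}
```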
Jorge has made several high-impact contributions to the wider Rust community. He is the primary author of rustbyexample.com, and last week published “eulermark”, a comparison of language performance on Project Euler problems, which happily showed Rust performing quite well. As part of his benchmarking work he has ported the ‘criterion’ benchmarking framework to Rust.
Contributing since April 2013, Björn has done many optimizations for Rust, including removing allocation bloat in iterators, fmt, and managed boxes; optimizing fail!; adding strategic inlining in the libraries; speeding up data structures in the compiler; and eliminating quadratic blowup in translation, along with other IR bloat problems.

He’s really done an amazing number of optimizations to Rust.

Most recently he earned huge kudos by teaching LLVM about the lifetime of variables, allowing Rust to make much more efficient use of the stack.

Björn is a total FOTT.
Jonas Hietala, aka @treeman, has been contributing a large amount of documentation examples recently for modules such as hashmap, treemap, priority_queue, collections, bigint, and vec. He has also been fixing UI bugs in the compiler, such as those related to format!

Jonas continues to add new examples and documentation every day, making documentation more approachable and understandable for all newcomers. Jonas truly is a friend of the tree!
Sven Nilson has done a great deal of work to build up the Rust crate ecosystem, starting with the well-regarded rust-empty project that provides boilerplate build infrastructure and, crucially, integrates well with other tools like Cargo.

His Piston project is one of the most promising Rust projects, and it’s one that integrates a number of crates, stressing Rust’s tooling at just the right time: when we need to start learning how to support large-scale external projects.

Sven is a friend of the tree.
jakub-, otherwise known as Jakub Wieczorek, has recently been working very hard to improve and fix lots of match-related functionality, a place where very few dare to venture! Most of this code had gone untouched for quite some time, and it’s now receiving some well-deserved love.

Jakub has fixed 10 bugs this month alone, many of which have been long-standing problems in the compiler. He has also been very responsive in fixing bugs as well as triaging issues that come up from fun match assertions.

Jakub truly is a friend of the tree!

klutzy has been doing an amazing amount of Windows work for years now. He picks up issues that affect our quality on Windows and picks them off one by one. It’s tedious and doesn’t get a ton of thanks, but is hugely appreciated by us. As part of the Korean community, he has also done a lot of work for the local community there. He is a friend of the tree. Thank you!
This week’s friend of the tree is Clark Gaebel. He just landed a huge first contribution to Rust. He dove in and made our hashmaps significantly faster by implementing Robin Hood hashing. He is an excellent friend of the tree.
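The idea behind Robin Hood hashing is simple: on a collision, the incoming entry displaces any resident that sits closer to its own home bucket (“takes from the rich”), which keeps worst-case probe lengths small and lets lookups terminate early. A toy sketch of the technique, not the actual std::collections implementation:

```rust
// Toy Robin Hood open-addressing map; illustrative only.
const CAP: usize = 16; // fixed power-of-two capacity for the demo

#[derive(Clone)]
struct Slot {
    key: u64,
    val: u64,
    dist: usize, // how far this entry sits from its home bucket
}

struct RobinHoodMap {
    slots: Vec<Option<Slot>>,
}

impl RobinHoodMap {
    fn new() -> Self {
        RobinHoodMap { slots: vec![None; CAP] }
    }

    // toy multiplicative hash, masked to the table size
    fn home(key: u64) -> usize {
        (key.wrapping_mul(0x9E37_79B9_7F4A_7C15) as usize) & (CAP - 1)
    }

    fn insert(&mut self, key: u64, val: u64) {
        let mut entry = Slot { key, val, dist: 0 };
        let mut i = Self::home(key);
        loop {
            let slot = &mut self.slots[i];
            match slot {
                None => {
                    *slot = Some(entry);
                    return;
                }
                Some(res) if res.key == entry.key => {
                    res.val = entry.val; // same key: overwrite
                    return;
                }
                Some(res) if res.dist < entry.dist => {
                    // the Robin Hood step: swap with a "richer" resident,
                    // then keep probing to re-place the displaced entry
                    std::mem::swap(res, &mut entry);
                }
                Some(_) => {}
            }
            i = (i + 1) & (CAP - 1);
            entry.dist += 1;
        }
    }

    fn get(&self, key: u64) -> Option<u64> {
        let mut dist = 0;
        let mut i = Self::home(key);
        loop {
            match &self.slots[i] {
                None => return None,
                Some(s) if s.key == key => return Some(s.val),
                // the invariant lets lookups stop early: past this
                // point a matching key would already have won the slot
                Some(s) if s.dist < dist => return None,
                Some(_) => {}
            }
            i = (i + 1) & (CAP - 1);
            dist += 1;
        }
    }
}

fn main() {
    let mut m = RobinHoodMap::new();
    for k in 0..8u64 {
        m.insert(k, k * 10);
    }
    assert_eq!(m.get(3), Some(30));
    assert_eq!(m.get(99), None);
}
```

The early-exit in `get` is the payoff: because entries are sorted by probe distance within a run, a miss is detected as soon as the probe distance exceeds that of the resident, without scanning to the next empty slot.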
This section is for content that has become outdated, but that we want to keep available to be read for historical/archival reasons.
This is an archive of Rust release artifacts from 0.1 through 1.7.0. Each release is signed with the Rust GPG signing key (older key, even older key).

In addition to the short-form release announcement included in the mailing list, each 0.x release has a longer explanation in the release notes.

This was an OS X bugfix release.
+ + diff --git a/chat/index.html b/chat/index.html new file mode 100644 index 000000000..d1d4265df --- /dev/null +++ b/chat/index.html @@ -0,0 +1,12 @@ + + + + +Redirecting to... /platforms/index.html.
+ + diff --git a/chat/zulip.html b/chat/zulip.html new file mode 100644 index 000000000..bd3896cac --- /dev/null +++ b/chat/zulip.html @@ -0,0 +1,12 @@ + + + + +Redirecting to... platforms/zulip.html.
+ + diff --git a/chat/zulip/index.html b/chat/zulip/index.html new file mode 100644 index 000000000..203c49cd1 --- /dev/null +++ b/chat/zulip/index.html @@ -0,0 +1,12 @@ + + + + +Redirecting to... platforms/zulip/index.html.
+ + diff --git a/clipboard.min.js b/clipboard.min.js new file mode 100644 index 000000000..02c549e35 --- /dev/null +++ b/clipboard.min.js @@ -0,0 +1,7 @@ +/*! + * clipboard.js v2.0.4 + * https://zenorocha.github.io/clipboard.js + * + * Licensed MIT © Zeno Rocha + */ +!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.ClipboardJS=e():t.ClipboardJS=e()}(this,function(){return function(n){var o={};function r(t){if(o[t])return o[t].exports;var e=o[t]={i:t,l:!1,exports:{}};return n[t].call(e.exports,e,e.exports,r),e.l=!0,e.exports}return r.m=n,r.c=o,r.d=function(t,e,n){r.o(t,e)||Object.defineProperty(t,e,{enumerable:!0,get:n})},r.r=function(t){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(t,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(t,"__esModule",{value:!0})},r.t=function(e,t){if(1&t&&(e=r(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var n=Object.create(null);if(r.r(n),Object.defineProperty(n,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var o in e)r.d(n,o,function(t){return e[t]}.bind(null,o));return n},r.n=function(t){var e=t&&t.__esModule?function(){return t.default}:function(){return t};return r.d(e,"a",e),e},r.o=function(t,e){return Object.prototype.hasOwnProperty.call(t,e)},r.p="",r(r.s=0)}([function(t,e,n){"use strict";var r="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t},i=function(){function o(t,e){for(var n=0;nThis section documents the processes of the community team, and related projects.
+In this FAQ we try to answer common questions about the Annual State of the Rust Language Community Survey. If you think a question is missing, or you have a concern about this document, please do not hesitate to contact the Rust Community Team or open an issue with the Community Team.
+Rust is an Open Source project. As such, we want to hear both from people inside and outside our ecosystem about the language, how it is perceived, and how we can make the language more accessible and our community more welcoming. This feedback will give our community the opportunity to participate in shaping the future of the project. We want to focus on the requirements of the language’s current and potential users to offer a compelling tool for them to solve real-world problems in a safe, efficient and modern way.
+On average, it should take 10 to 15 minutes.
+It includes some basic questions about how respondents use Rust, their opinion of the ecosystem’s tools and libraries, some basic questions regarding the respondents’ employer or organization and their intention to use Rust, technical background and demographic questions, and some feedback related to the Rust project’s community activities and general priorities.
+The answers from the survey will be anonymized, aggregated, and summarized. A high-level writeup will be posted to https://blog.rust-lang.org.
+Nearly every question in the survey is optional. You are welcome to share as much or as little information as you are comfortable with. Only the Rust language Core Team and the Community Team Survey Leads will have access to the raw data from the survey. All the answers are anonymized prior to being shared with the rest of the teams and prior to publication of the results.
+The survey optionally collects contact information for the following cases, if you expressed interest in:
+If you would like to be contacted about any of this, or have any other concerns, but you don’t want to associate your email with your survey responses, you can instead email the Rust Community Team at community-team@rust-lang.org or the Core Team at core-team@rust-lang.org, and we will connect you to the right people.
+We expect to publish results from the survey within a month or two of the survey’s completion. The survey results will be posted to the project’s blog.
+Redirecting to... https://rustc-dev-guide.rust-lang.org/bug-fix-procedure.html.
+ + diff --git a/compiler/cross-compilation/index.html b/compiler/cross-compilation/index.html new file mode 100644 index 000000000..fb2faf01a --- /dev/null +++ b/compiler/cross-compilation/index.html @@ -0,0 +1,182 @@ + + + + + +This subsection documents cross compiling your code on one platform to another.
+ +C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\lib
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.24728\lib
C:\Program Files (x86)\Windows Kits\10\Lib\10.0.14393.0
C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3
lib
in
+the above paths with include
to get the appropriate headers.[target.x86_64-pc-windows-msvc] linker = "lld-link"
+or whatever your lld pretending to be link.exe is called.--target=x86_64-pc-windows-msvc
while building. Hopefully it works. If it
+doesn’t, well… I don’t know.If you are a member of another team and would like to raise an issue with the +compiler team..
+Write a comment on a GitHub issue describing the reason for the nomination
+(i.e. what decision needs to be made/what opinion is sought; what are the
+relevant parts to the compiler team, etc) and add the I-compiler-nominated
+label to an issue (you can include @rustbot label +I-compiler-nominated
in
+your comment to do this).
Once nominated, the issue will be discussed in an upcoming triage +meeting. The compiler team doesn’t always get through +all nominated issues each week, so it can take more than one meeting for your +issue to be discussed.
+Once discussed, a member of the team will comment on the issue with the +conclusion of the discussion, linking to the relevant Zulip chat.
+If there is an existing working relationship between a member of the requesting +team and a contributor to the compiler, then the first option that a team has +for requesting that tasks be completed is to ping that contributor and ask if they +can complete the task. It is recommended that pings take place in public Zulip +channels so that:
+It is worth considering the available bandwidth of the contributor that the +request is being made of, and whether their areas of expertise in the compiler +are relevant.
+When there is not an appropriate contact in the compiler team to reach out to +directly, write a comment on a GitHub issue (or create an issue) describing the +task that needs to be completed. Teams should nominate issues for the compiler team +when issues:
+I-prioritize
label and it will be enqueued for prioritization.A detailed description of the feature being requested or the bug to be fixed is +helpful wherever possible (so that the compiler contributor does not need to +make a guess as to a solution that would solve the problem for the requesting +team). If a member of the requesting team isn’t explicitly listed as the +point-of-contact for the issue, then the author of the comment will be assumed +to be the point-of-contact.
+Add the I-compiler-nominated
label to an issue (you can use @rustbot label +I-compiler-nominated
to do this).
Once nominated, the issue will be discussed in an upcoming triage +meeting. The compiler team doesn’t always get through +all nominated issues each week, so it can take more than one meeting for your +issue to be discussed. In the compiler team’s discussion, the issue may:
+Redirecting to... https://rustc-dev-guide.rust-lang.org/diagnostics/diagnostic-codes.html.
+ + diff --git a/compiler/index.html b/compiler/index.html new file mode 100644 index 000000000..347f5ba08 --- /dev/null +++ b/compiler/index.html @@ -0,0 +1,192 @@ + + + + + +This section documents the Rust compiler itself, its APIs, and how to +contribute and provide bug fixes for the compiler.
+FIXME
comments in the Rust compiler.Introduced in RFC 2904, a “major change proposal” is a lightweight
+form of RFC that the compiler team uses for architectural changes that
+are not end-user facing. (It can also be used for small user-facing
+changes like adding new compiler flags, though in that case we also
+require an rfcbot fcp
to get full approval from the team.) Larger
+changes or modifications to the Rust language itself require a full
+RFC (the latter fall under the lang team’s purview).
As the compiler grows in complexity, it becomes harder and harder to track what’s going on. We don’t currently have a clear channel for people to signal their intention to make “major changes” that may impact other developers in a lightweight way (and potentially receive feedback).
+Our goal is to create a channel for signaling intentions that lies somewhere between opening a PR (and perhaps cc’ing others on that PR) and creating a compiler team design meeting proposal or RFC.
+Our goals with the MCP are as follows:
+If you would like to make a major change to the compiler, the process is as follows:
+#t-compiler/major changes
will automatically be created for you by a bot.rfcbot fcp merge
comment.Some major change proposals will be conditionally accepted. This indicates that we’d like to see the work land, but we’d like to re-evaluate the decision of whether to commit to the design after we’ve had time to gain experience. We should try to be clear about the things we’d like to evaluate, and ideally a timeline.
+Some proposals will not be accepted. Some of the possible reasons:
+The PR should be closed or marked as blocked, with a request to create +a major change proposal first.
+If the PR description already contains suitable text that could serve +as an MCP, then simply copy and paste that into an MCP issue. Using an +issue consistently helps to ensure that the tooling and process works +smoothly.
+Of course! You are free to work on PRs or write code. But those PRs should be marked as experimental and they should not land, nor should anyone be expected to review them (unless folks want to).
+The rough intuition is “something that would require updates to the rustc-dev-guide or the rustc book”. In other words:
+Note that, in some cases, the change may be deemed too big and a full FCP or RFC may be required to move forward. This could occur with significant public facing change or with sufficiently large changes to the architecture. The compiler team leads can make this call.
+Note that whether something is a major change proposal is not necessarily related to the number of lines of code that are affected. Renaming a method can affect a large number of lines, and even require edits to the rustc-dev-guide, but it may not be a major change. At the same time, changing names that are very broadly used could constitute a major change (for example, renaming from the tcx
context in the compiler to something else would be a major change).
The MCP “seconding” process is only meant to be used to get agreement
+on the technical architecture we plan to use. It is not sufficient to
+stabilize new features or make public-facing changes like adding a -C
+flag. For that, an rfcbot fcp
is required (or perhaps an RFC, if the
+change is large enough).
For landing compiler flags in particular, a good approach is to start
+with an MCP introducing a -Z
flag and then “stabilize” the flag by
+moving it to -C
in a PR later (which would require rfcbot fcp
).
Major change proposals are not sufficient for language changes or +changes that affect cargo.
+#t-compiler/major changes
:
+Please direct technical conversation to the Zulip stream.
+The compiler-team repo issues are intended to be low traffic and used for procedural purposes. Note that to “second” a design or offer to review, you should be someone who is familiar with the code, typically but not necessarily a compiler team member or contributor.
+These types of procedural comments can be left on the issue (it’s also good to leave a message in Zulip). See the previous section.
+Usually the experts in the given area will reach a consensus here. But if there is some need for a “tie breaker” vote or judgment call, the compiler-team leads make the final call.
+Here are some examples of changes that were made in the past that would warrant the major change process:
+Ty
type-C
flag that exposes some minor variantHere are some examples of changes that are too big for the major change process, or which at least would require auxiliary design meetings or a more fleshed out design before they can proceed:
+Here are some examples of things that don’t merit any MCP:
+Major Change Proposals can be closed:
+This team discusses membership in the compiler team. There are currently two levels of membership:
+People who are looking to contribute to the compiler typically start +in one of two ways. They may tackle “one off” issues, or they may get +involved in some kind of existing working group. They don’t know much +about the compiler yet and have no particular privileges. They are +assigned to issues using the triagebot and (typically) work with a +mentor or mentoring instructions.
+Once a working group participant has been contributing regularly for +some time, they can be promoted to the level of a compiler team +contributor (see the section on how decisions are made +below). This title indicates that they are someone who contributes +regularly.
+It is hard to define the precise conditions when such a promotion is +appropriate. Being promoted to contributor is not just a function of +checking various boxes. But the general sense is that someone is ready +when they have demonstrated three things:
+Being promoted to contributor implies a number of privileges:
+It also implies some obligations (in some cases, optional obligations):
+As a contributor gains in experience, they may be asked to become a +compiler team member. This implies that they are not only a +regular contributor, but are actively helping to shape the direction +of the team or some part of the compiler (or multiple parts).
+Promotion decisions (from participant to contributor, and from
+contributor to member) are made by having an active team member send
+an e-mail to the alias compiler-private@rust-lang.org
. This e-mail
+should include:
Compiler-team members should send e-mail giving their explicit assent, +or with objections. Objections should always be resolved before the +decision is made final. E-mails can also include edits or additions for the +public announcement.
+To make the final decision:
+We do not require all team members to send e-mail, as historically +these decisions are not particularly controversial. For promotion to a +contributor, the only requirement is that the compiler team lead +agrees. For promotion to a full member, more explicit mails in favor +are recommended.
+Once we have decided to promote, then the announcement can be posted +to internals, and the person added to the team repository.
+It is worth emphasizing that becoming a contributor or member of the +compiler team does not necessarily imply writing PRs. There are a wide +variety of tasks that need to be done to support the compiler and +which should make one eligible for membership. Such tasks would +include organizing meetings, participating in meetings, bisecting and +triaging issues, writing documentation, working on the rustc-dev-guide. +The most important criteria for elevation to contributor, +in particular, is regular and consistent participation. The most +important criteria for elevation to member is actively shaping the +direction of the team or compiler.
+If at any time a current contributor or member wishes to take a break +from participating, they can opt to put themselves into alumni status. +When in alumni status, they will be removed from Github aliases and +the like, so that they need not be bothered with pings and messages. +They will also not have r+ privileges. Alumni members will however +still remain members of the GitHub org overall.
+People in alumni status can ask to return to “active” status at any +time. This request would ordinarily be granted automatically barring +extraordinary circumstances.
+People in alumni status are still members of the team at the level +they previously attained and they may publicly indicate that, though +they should indicate the time period for which they were active as +well.
+If desired, a team member may also ask to move back to contributor +status. This would indicate a continued desire to be involved in +rustc, but that they do not wish to be involved in some of the +weightier decisions, such as who to add to the team. Like full alumni, +people who were once full team members but who went back to +contributor status may ask to return to full team member status. This +request would ordinarily be granted automatically barring +extraordinary circumstances.
+If a contributor or a member has been inactive in the compiler for 6 +months, then we will ask them if they would like to go to alumni +status. If they respond yes or do not respond, they can be placed on +alumni status. If they would prefer to remain active, that is also +fine, but they will get asked again periodically if they continue to +be inactive.
+ +So you want to add a new command-line flag to rustc. What is the procedure?
+The first question to ask yourself is:
+-Ztreat-err-as-bug
)?If so, you can just add it in a PR, no check-off is required beyond ordinary review.
+If this option is meant to be used by end-users or to be exposed on the stable channel, however, it represents a “public commitment” on the part of rustc that we will have to maintain, and hence there are a few more details to take care of.
+There are two main things to take care of, and they can proceed in either order, but both must be completed:
+Finally, some options begin as unstable and only get stabilized over time, in which case you will also need:
+The “proposal” part describes the motivation and design of the new option you wish to add. It doesn’t necessarily have to be very long. It takes the form of a Major Change Proposal.
+The proposal should include the following:
+Note that it is fine if you don’t have any implementation notes, precedent, or alternatives to discuss.
+Also, one good approach to writing the MCP is basically to write the documentation you will have to write anyway to explain to users how the option works, and then add any additional notes on alternatives and so forth that are required.
+Once you’ve written up the proposal, you can open a MCP issue. But note that since this MCP is promoting a permanent change, a full compiler-team FCP is required, and not just a “second”. This can be done by @rfcbot fcp merge
by a team member.
Naturally your new option will also have to be implemented. You can implement the option and open up a PR. Often, this implementation work actually happens before the MCP is created, and that’s fine – we’ll just ask you to open an MCP with the write-up.
+See the Command-line Arguments chapter in the rustc dev guide for guidelines on how to name and define a new argument.
+A few notes that are sometimes overlooked:
+-Z
or because they require -Zunstable-options
to use.Typically options begin as unstable, meaning that they are either used with -Z
or require -Zunstable-options
.
Once the issue lands we should create a tracking issue that links to the MCP and where stabilization can be proposed.
+Stabilization generally proceeds when the option has a seen a bit of use and the implementation seems to be working as expected for its intended purpose.
+Remember that when stabilization occurs, documentation should be moved from the Unstable Book to the Rustc Book.
+ +The compiler team has a number of notification groups that we use to +ping people and draw their attention to issues. Notification groups +are setup so that anyone can join them if they want.
+If you’d like to create a notification group, here are the steps. +First, you want to get approval from the compiler team:
+O-Windows
.Once the MCP is accepted, here are the steps to actually create the group. +In some cases we include an example PR from some other group.
+This section documents the processes of the prioritization WG.
+ +As the compiler team’s resources are limited, the prioritization working group’s main goal is to identify the most relevant issues to work on, so that the compiler team can focus on what matters the most.
+issue
refers to bugs and feature requests that are nominated for prioritization, by flagging the I-prioritize
label as described below.
This document will define what each label means, and what strategy for each label will be used.
+Labeling an issue as I-prioritize
starts the prioritization process, which will end by removing the I-prioritize
label and appending one of the 4 labels we will discuss below:
Each of these labels defines a strategy the team will adopt regarding:
+A P-critical
issue is a potentially blocker issue.
The Working Group will keep track of these issues and will remind the compiler team on a weekly basis during the triage meeting.
+Examples of things we typically judge to be “critical” bugs:
+std::vec::Vec
docs state order in which it drops its elements is subject to change)A P-critical issue will receive the most attention. It must be assigned one or several people as soon as possible, and the rest of the team should do their best to help them out if/when applicable.
+P-high
issues are issues that need attention from the compiler team, but not to the point that they need to be discussed at every meeting.
+They can be P-critical
issues that have a mitigating condition as defined above, or important issues that aren’t deemed blockers.
Because there are too many P-high
issues to fit in every compiler meeting, they should rather be handled asynchronously by the Prioritization WG, in order to help them move forward. They can still occasionally be brought up at meetings when it is deemed necessary.
The effectiveness of the Prioritization WG will be a direct consequence of our ability to draw the line between P-critical
and P-high
issues. There shouldn’t be too many P-critical
issues that compiler meetings become unmanageable, but critical issues shouldn’t get lost in the list of P-high issues.
P-high issues are issues the teams will mostly work on. We want to make sure they’re assigned, and keep an eye on them.
+P-medium
refer to issues that aren’t a priority for the team, and that will be resolved in the long run. Eg issues that will be fixed after a specific feature has landed.
+They are issues we would mentor someone interested in fixing.
+They will remain in this state until someone complains, a community member fixes it, or it gets fixed by accident.
P-low
refer to issues issue that the compiler team doesn’t plan to resolve, but are still worth fixing.
This document details the procedure the WG-prioritization follows to fill the agenda for the weekly meeting of T-compiler
.
+The working group focuses mainly on triaging T-compiler
regressions, identifying possibly critical (and thus potential release blocker) issues and building the agenda for the weekly T-compiler
meeting summarizing the main points to be discussed.
regression-*
labels)A-*
labels)The T-compiler
agenda is generated from a template (available on HackMD or Github). We suggest working the following steps in this order:
T-compiler
labels where appropriateI-prioritize
I-compiler-nominated
(i.e. needing a T-compiler discussion)P-high
Regressions labeled with I-prioritize
are signaling that a priority assessment is waiting. When this label is added to an issue, the triagebot
creates automatically a notification for @WG-prioritization members on the Zulip stream.
To assign a priority, we replace the I-prioritize
label with one of P-critical
, P-high
, P-medium
or P-low
and adding a succinct comment to link the Zulip discussion where the issue prioritization occurred, example of a template for the comment:
++WG-prioritization assigning priority (Zulip discussion).
+@rustbot label -I-prioritize +P-XXX
+
Ideally, we want all T-compiler
issues with a I-prioritize
label to have a priority assigned, or strive to reach this goal: sometimes different factors are blocking issues from being assigned a priority label, either because the report or the context is unclear or because cannot be reproduced and an MCVE would help. Don’t hesitate to ask for clarifications to the issue reporter or ping the ICEbreaker
team when an ICE (“Internal Compiler Errors”) needs a reduction (add a comment on the issue with @rustbot ping icebreakers-cleanup-crew
)
Keep an eye also on regressions (stable, beta and nightly), ideally they should an assignee.
+An MCP is a Major Change Proposal, in other words a change to the rust compiler that needs a bit more thought and discussion within the compiler team than a pull request. The life cycle of an MCP is described in the documentation. The relevant part for the WG-Prioritization is keeping an eye on them and accept all MCPs that have been on final-comment-period
for 10 or more days.
To accept an MCP, remove final-comment-period
label, add major-change-accepted
label and close the issue. A notification to the relevant Zulip topic (in this stream) will be automatically sent by the triagebot
.
Run triagebot’s CLI to generate the agenda. You need to clone https://github.com/rust-lang/triagebot (there is no official prepackaged release for this tool) and export two environment variables: GITHUB_API_TOKEN
and optionally a GOOGLE_API_KEY
to access a public Google calendar (if this env var is not found, meetings should be manually copy&pasted from here).
To generate the meeting’s agenda, run:
+$ cargo run --bin prioritization-agenda
+
+Copy the content of the generated agenda on HackMD. This will be our starting point.
+Paste the markdown file of this week performance triage logs to the agenda and clean it up a little bit removing emojis (to make the text readable when pasted on Zulip).
+About two hours before the scheduled meeting, create a new topic on the Zulip stream #t-compiler/meetings
titled “[weekly] YYYY-MM-DD” using the the following message template:
Hi @*T-compiler/meeting*; the triage meeting will happen tomorrow in about 2 hours.
+*WG-prioritization* has done pre-triage in #**t-compiler/wg-prioritization/alerts**
+@*WG-prioritization* has prepared the [meeting agenda](link_to_hackmd_agenda)
+
+Working group checkins for today:
+- @**WG-foo** by @**person1**
+- @**WG-bar** by @**person2**
+
+Working Group checkins rotation are generated by a script at this page (TODO: script is outdated and could probably be merged into the triagebot
CLI code).
Checkins about the progress of working groups are not mandatory but we rotate them all to be sure we don’t miss on important progresses.
+These are pull requests that the compiler team might want to backport to a release channel. Example a stable-to-beta-regression
fix might want to be backported to the beta release channel. A stable-to-stable-regression
fix particularly annoying might warrant a point release (i.e. release a 1.67.1
after a 1.67.0
).
Follow the General issues review process.
+These are pull requests waiting on a discussion / decision from T-compiler
(sometimes more than one team).
Try to follow the General issues review process. Explicitly nominate any issue that can be quickly resolved in a triage meeting.
+This is probably the less automatable part of the agenda (and likely the least fun). The triagebot
will emit a list of 50 pull requests ordering them by least recent update. The idea is to issue mentions to assigned reviewers during the meeting ensuring that they stay on top of them. We usually try to keep the number of these mentions to around 5 for each meeting.
There are two human factors here to keep in consideration:
+Striking a balance between these two diverging forces requires some empathy and “tribal knowledge” that comes with practice. Other factors can be blocking a pull request progress:
+S-waiting-on-review
and S-waiting-on author
handling the life cycle of a pull request are not promptly applied. A pull request that is ready to be reviewed but it’s not labeled S-waiting-on-review
is idling for no purpose.P-critical
and P-high
regressions without an assigneeTry to follow the General issues review process.
+Issues labeled with I-compiler-nominated
generally are nominated to specifically have the compiler team dedicate them a special slice of the meeting (generally towards the end). After the discussion, add a comment on Github linking the Zulip message where the discussion started (so everyone can read). T-compiler
sometimes writes a summary of the discussion on the issue itself.
Try to follow the General issues review process:
+I-compiler-nominated
Re-run the triagebot CLI script and update the agenda on HackMD with new data (if any). This is useful when there are last second changes affecting the agenda content.
+The meeting is over! Time to cleanup a little bit.
+Lock the agenda file on HackMD assigning write permissions to Owners
. Download the markdown file and commit it to this repository.
Remove the to-announce
label from MCPs, unless this label was added exactly during the meeting (and therefore will be seen during the following meeting).
Remove to-announce
FCPs from rust repo, compiler-team repo and forge repo, same disclaimer as before.
Accept or decline the beta-nominated
and stable-nominated
backports according to what was decided during the meeting. For more info check the T-release backporting docs.
If accepted: add the {beta,stable}-accepted
label and keep the {beta,stable}-nominated
label. Other automated procedures will process these pull requests; it's important to leave both labels. Add a comment on GitHub linking to the Zulip discussion.
If declined: remove the {beta,stable}-nominated
label. Add a comment on GitHub explaining why the backport was declined and link to the Zulip discussion.
Remove the I-compiler-nominated
label from issues that were discussed. Sometimes not all nominated issues are discussed (because of time constraints); in this case the I-compiler-nominated label
will stick until the next meeting.
Create a new agenda stub for the following week using our template and post the link on Zulip, so it’s available for people if they want to add content during the week.
Every PR that lands in the compiler and its associated crates must be reviewed by at least one person who is knowledgeable with the code in question.
+When a PR is opened, you can request a reviewer by including r? @username
in the PR description. If you don’t do so, rustbot
+will automatically assign someone.
It is common to leave an r? @username
comment at some later point to
request review from someone else. This will also reassign the PR.
We never merge PRs directly. Instead, we use bors. A qualified
+reviewer with bors privileges (e.g., a compiler
+contributor) will leave a comment like @bors r+
.
+This indicates that they approve the PR.
People with bors privileges may also leave a @bors r=username
+command. This indicates that the PR was already approved by
+@username
. This is commonly done after rebasing.
Finally, in some cases, PRs can be “delegated” by writing @bors delegate+
or @bors delegate=username
. This will allow the PR author
+to approve the PR by issuing @bors
commands like the ones above
+(but this privilege is limited to the single PR).
If a merged PR is found to have caused a meaningful unanticipated regression, +the best policy is to revert it quickly and re-land it later once a fix and +regression test are added.
+A “meaningful regression” in this case is up to the judgment of the person +approving the revert. As a rule of thumb, this would be a bug in a stable +or otherwise important feature that causes code to stop compiling, changes +runtime behavior, or triggers a (warn-by-default or higher) lint incorrectly in +real-world code.
+When these criteria are in doubt, and especially if real-world code is affected, +revert the PR. This allows bleeding edge users to continue to use and report +bugs on HEAD with a higher degree of certainty about where new bugs are introduced.
+Before being reverted, a PR should be shown to cause a regression with a fairly +high degree of certainty (e.g. bisection on commits, or bisection on nightlies +with one or more compiler team members pointing to this PR, or it’s simply +obvious to everyone involved). Only revert with lower certainty if the issue is +particularly critical or urgent to fix.
+The easiest method for creating a revert is to use the “Revert” button on +Github. This appears next to the “bors merged commit abcd” message on a pull +request, and creates a new pull request.
+ +Alternatively, a revert commit can be created using the git CLI and then +uploaded as a pull request:
$ git revert -m 1 62d5bee   # -m 1: revert relative to the merge commit's first parent (the mainline)
+
+It’s polite to tag the author and reviewer of the original PR so they know +what’s going on. You can use the following message template:
+Reverts rust-lang/rust#123456
+cc @author @reviewer
+
+This revert is based on the following report of a regression caused by this PR:
+<link to issue or comment(s)>
+
+In accordance with the compiler team [revert policy], PRs that cause meaningful
+regressions should be reverted and re-landed once the regression has been fixed
+(and a regression test has been added, where appropriate).
+[revert policy]: https://forge.rust-lang.org/compiler/reviews.html#reverts
+
+Fear not! Regressions happen. Please rest assured that this does not
+represent a negative judgment of your contribution or ability to contribute
+positively to Rust in the future. We simply want to prioritize keeping existing
+use cases working, and keep the compiler more stable for everyone.
+
+r? compiler
+
+If you have r+ privileges, you can self-approve a revert.
+Generally speaking, reverts should have elevated priority and match the rollup
+status of the PR they are reverting. If a non-rollup PR is shown to have no
+impact on performance, it can be marked rollup=always
.
Often it is tempting to address a regression by posting a follow-up PR that, +rather than reverting the regressing PR, instead augments the original in +small ways without reverting its changes overall. However, if real-world users +have reported being affected, this practice is strongly discouraged unless one +of the following is true:
+r+
it.While it can feel like a significant step backward to have your PR reverted, in +most cases it is much easier to land the PR a second time once a fix can be +confirmed. Allowing a revert to land takes pressure off of you and your +reviewers to act quickly and gives you time to address the issue fully.
+All reviewers are strongly encouraged to explicitly mark a PR as to whether or +not it should be part of a rollup with one of the following:
+rollup=always
: These PRs are very unlikely to break tests or have performance
+implications. Example scenarios:
+rollup=maybe
: This is the default if you do not specify a rollup
+status. Use this if you don’t have much confidence that it won’t break
+tests. This can be used if you aren’t sure if it should be one of the other
+categories. Since this is the default, there is usually no need to
+explicitly specify this, unless you are un-marking the rollup level from a
+previous command.rollup=iffy
: Use this for mildly risky PRs (more risky than “maybe”).
+Example scenarios:
+rollup=never
: This should never be included in a rollup (please
+include a comment explaining why you have chosen this). Example scenarios:
+++Note:
+
+@bors rollup
is equivalent to@bors rollup=always
+@bors rollup-
is equivalent to@bors rollup=never
Reviewers are encouraged to set one of the rollup statuses listed above +instead of setting priority. Bors automatically sorts based on the rollup +status (never is the highest priority, always is the lowest), and also by PR +age. If you do change the priority, please use your best judgment to balance +fairness with other PRs.
+The following is some guidance for setting priorities:
+bors privileges are binary: the bot doesn’t know which code you are +familiar with and what code you are not. They must therefore be used +with discretion. Do not r+ code that you do not know well – you can +definitely review such code, but try to hand off reviewing to +someone else for the final r+.
+Similarly, never issue a r=username
command unless that person has
+done the review, and the code has not changed substantially since the
+review was done. Rebasing is fine, but changes in functionality
+typically require re-review (though it’s a good idea to try and
+highlight what has changed, to help the reviewer).
The “steering meeting” is a weekly meeting dedicated to planning and +high-level discussion. The meeting operates on a repeating schedule:
+The first meeting of the 4-week cycle is used for planning. The +primary purpose of this meeting is to select the topics for the next +three meetings. The topics are selected from a set of topic +proposals, which must be uploaded and available for perusal before the +meeting starts. The planning meeting is also an opportunity to check +on the “overall balance” of our priorities.
+The remaining meetings are used for design or general discussion. +Weeks 2 and 3 can be used for technical or non-technical +discussion; it is also possible to use both weeks to discuss the same +topic, if that topic is complex. Week 4 is reserved for +non-technical topics, so as to ensure that we are keeping an eye on +the overall health and functioning of the team.
The team accepts proposals via an open submission process,
which is documented on its own page.
+After each planning meeting, the topics for the next three weeks are +added to the compiler-team meeting calendar and a blog post is +posted to the Inside Rust blog.
+See the compiler team meeting calendar for the canonical date and +time. The meetings take place in the #t-compiler stream on the +rust-lang Zulip.
+ +design meeting YYYY.MM.DD
topic
+@t-compiler/meeting
, ideally 1h or so before the meeting actually starts,
+to remind people@t-compiler/meeting
to let people know the meeting is startingTo guide the meeting, create a shared hackmd document everyone can +view (or adapt an existing one, if there is a write-up). Use this to +help structure the meeting, document consensus, and take live +notes. Try to ensure that the meeting ends with sort of consensus +statement, even if that consensus is just “here are the problems, here +is a space of solutions and their pros/cons, but we don’t have +consensus on which solution to take”.
+minutes/design-meeting
directory in the compiler-team
+repositorydesign meeting YYYY.MM.DD
topic
+@t-compiler/meeting
, ideally 1h or so before the meeting actually starts,
+to remind people@t-compiler/meeting
to let people know the meeting is startingTo actually make the final selection, we recommend
+For each scheduled meeting, create a calendar event:
+#t-compiler, Zulip
In the relevant issues, add the meeting-scheduled
label and add a
+message like:
In today's [planning meeting], we decided to schedule this meeting for **DATE**.
+
+[Calendar event]
+
+[planning meeting]: XXX link to Zulip topic
+[Calendar event]: XXX link to calendar event
+
+You can get the link to the calendar event by clicking on the event in +google calendar and selecting “publish”.
+Add a blog post to the Inside Rust blog using the template found on +the compiler-team repository.
+ +If you would like to submit a proposal to the steering meeting for +group discussion, read on! This page has all the details.
+In short, all you have to do is
+You don’t have to have a lot of details to start: just a few sentences +is enough. But, especially for technical design discussions, we will +typically expect that some form of more detailed overview be made +available by the time the meeting takes place.
+Here are some examples of possible technical topics that would be +suitable for the steering meeting:
+Steering meetings are also a good place to discuss other kinds of proposals:
+Note that a steering meeting is not required to create a new +working group or an out-of-tree crate, but it can be useful if the +proposal is complex or controversial, and you would like a dedicated +time to talk out the plans in more detail.
+When deciding the topics for upcoming meetings, we must balance a number of things:
+It is perfectly acceptable to choose not to schedule a particular +slot. This could happen if (e.g.) there are no proposals available or +if nothing seems important enough to discuss at this moment. Note +that, to keep the “time expectations” under control, we should +generally stick to the same 4-week cycle and simply opt to skip +meetings, rather than (e.g.) planning things at the last minute.
+Proposals can be added by opening an issue on the compiler-team +repository. There is an issue template for meeting +proposals that gives directions. The basic idea is that you open an +issue with a few sentences describing what you would like to talk +about.
+Some details that might be useful to include:
+By the time the meeting takes place, we generally would prefer to have +a more detailed write-up or proposal. You can find a template for +such a proposal here. This should be created in the form of a hackmd +document – usually we will then update this document with the minutes +and consensus from the meeting. The final notes are then stored in the +minutes directory of the compiler-team repository.
+The requirements for non-technical proposals are somewhat looser. A +few sentences or paragraphs may well suffice, if it is sufficient to +understand the aims of the discussion.
+What happens if there are not enough proposals? As noted above, +meetings are not mandatory. If there aren’t enough proposals in some +particular iteration, then we can just opt to not discuss anything.
+ +The triage meeting is a weekly meeting where we go over the open +issues, look at regressions, consider beta backports, and other such +business. In the tail end of the meeting, we also do brief check-ins +with active working groups to get an idea what they’ve been working +on.
+See the compiler team meeting calendar for the canonical date and +time. The meetings take place in the #t-compiler stream on the +rust-lang Zulip.
+The meeting procedure is documented in rust-lang/rust#54818.
+The working group check-in schedule is available on the compiler-team website.
+ +The Rust project maintains two blogs. The “main blog” (blog.rust-lang.org) and a “team blog” +(blog.rust-lang.org/inside-rust). This document provides the guidelines for what it takes to write +a post for each of those blogs, as well as how to propose a post and to choose which blog is most +appropriate.
+So you want to write a Rust blog post, and you’d like to know which blog you should post it on. +Ultimately, there are three options:
+There are two key questions to answer in deciding which of these seems right:
+In general, if you are speaking as a “private citizen”, then you are probably best off writing on +your own personal blog.
+If, however, you are writing in an official capacity, then one of the Rust blogs would be a +good fit. Note that this doesn’t mean you can’t write as an individual. Plenty of the posts on +Rust’s blog are signed by individuals, and, in fact, that is the preferred option. However, those +posts are typically documenting the official position of a team — a good example is Aaron Turon’s +classic post on Rust’s language ergonomics +initiative. Sometimes, the posts are +describing an exciting project, but again in a way that represents the project as a whole (e.g., +Manish Goregaokar’s report on Fearless Concurrency in Firefox +Quantum).
+To decide between the main blog and the team blog, the question to ask yourself is who is the +audience for your post. Posts on the main blog should be targeting all Rust users or +potential users — they tend to be lighter on technical detail, and written without requiring as +much context. Posts on the team blog can assume a lot more context and familiarity with Rust.
+The core team ultimately decides what to post on the main Rust blog.
+Post proposals describing exciting developments from within the Rust org are welcome, as well as +posts that describe exciting applications of Rust. We do not generally do “promotional +cross-posting” with other projects, however.
+If you would like to propose a blog post for the main blog, please reach out to a core team +member. It is not suggested to just open PRs +against the main Rust blog that add posts without first discussing it with a core team member.
+One special case are the regular release note posts that accompany every Rust release. These are +managed by the release team and go on the main blog.
+The blog posts are published on the same day as the release by the same person in the release team +running the release. Releases always happen on Thursdays.
+Before publishing a release post, it goes through a drafting process:
+Teams can generally decide for themselves what to write on the team Rust blog.
+Typical subjects for team Rust blog posts include:
+To propose a blog post for the team blog of a particular team, reach out to the team lead or some +other team representative.
+ +This section documents policies established by the core team. These +policies tend to apply for “project-wide resources”, such as the Rust +blogs.
+ +If we get a DMCA takedown notice, here’s what needs to happen:
+Before removing the crates, get in touch with legal support, currently by +emailing the Core team, and ask an opinion from them on the received request and +whether we have to comply with it.
+Remove it from the database:
+heroku run -a crates-io -- target/release/crates-admin delete-crate [crate-name]
+
+or
+heroku run -a crates-io -- target/release/crates-admin delete-version [crate-name] [version-number]
+
+Remove the crate or version from the index. To remove an entire crate, remove +the entire crate file. For a version, remove the line corresponding to the +relevant version.
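For context, each published version of a crate is one JSON object on its own line of the crate's index file, so removing a version amounts to deleting that line. A minimal sketch (the file path and the simplified entries are illustrative; real index entries carry more fields such as deps and cksum):

```shell
# Build a toy index file with two published versions (illustrative entries).
printf '%s\n' \
  '{"name":"demo","vers":"1.0.0","yanked":false}' \
  '{"name":"demo","vers":"1.2.3","yanked":false}' > /tmp/demo-index

# Removing version 1.2.3 means dropping the line whose "vers" field matches it.
grep -v '"vers":"1.2.3"' /tmp/demo-index > /tmp/demo-index.new
mv /tmp/demo-index.new /tmp/demo-index

cat /tmp/demo-index
```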
+Remove the crate archive(s) and readme file(s) from S3.
+Invalidate the CloudFront cache:
+aws cloudfront create-invalidation --distribution-id EJED5RT0WA7HA --paths '/*'
+
+The docs.rs application supports deleting all the documentation ever published +of a crate, by running a CLI command. The people who currently have permissions +to access the server and run it are:
+You can find the documentation on how to run the command here.
+ +There are times when Heroku needs to perform a maintenance on our database +instances, for example to apply system updates or upgrade to a newer database +server.
+We must not let Heroku run maintenances during the maintenance window to +avoid disrupting production users (move the maintenance window if necessary). +This page contains the instructions on how to perform the maintenance with the +minimum amount of disruption.
+Performing maintenance on the primary database requires us to temporarily put +the application in read-only mode. Heroku performs maintenances by creating a +hidden database follower and switching over to it, so we need to prevent writes +on the primary to let the follower catch up.
+Maintenance should take less than 5 minutes of read-only time, but we should +still announce it ahead of time on our status page. This is a sample message we +can use:
+++The crates.io team will perform a database maintenance on YYYY-MM-DD from +hh:mm to hh:mm UTC.
+We expect this to take less than 5 minutes to complete. During maintenance +crates.io will only be available in read-only mode: downloading crates and +visiting the website will still work, but logging in, publishing crates, +yanking crates or changing owners will not work.
+
1 hour before the maintenance
+5 minutes before the maintenance
+Scale the background worker to 0 instances:
+heroku ps:scale -a crates-io background_worker=0
+
+At the start of the maintenance
+Update the status page with this message:
+++Scheduled maintenance on our database is starting.
+We expect this to take less than 5 minutes to complete. During maintenance +crates.io will only be available in read-only mode: downloading crates and +visiting the website will still work, but logging in, publishing crates, +yanking crates or changing owners will not work.
+
Configure the application to be in read-only mode without the follower:
+heroku config:set -a crates-io READ_ONLY_MODE=1 DB_OFFLINE=follower
+
+The follower is removed because while Heroku tries to prevent connections to +the primary database from failing during maintenance we observed that the +same does not apply to the follower database, and there could be brief +periods while the follower is not available.
+Wait for the application to be redeployed with the new configuration:
+heroku ps:wait -a crates-io
+
+Run the database maintenance:
+heroku pg:maintenance:run --force -a crates-io
+
+Wait for the maintenance to finish:
+heroku pg:wait -a crates-io
+
+Confirm all the databases are online:
+heroku pg:info -a crates-io
+
+Confirm the primary database fully recovered (should output false
):
echo "SELECT pg_is_in_recovery();" | heroku pg:psql -a crates-io DATABASE
+
+Switch off read-only mode:
+heroku config:unset -a crates-io READ_ONLY_MODE
+
WARNING: the Heroku Dashboard’s UI is misleading when removing an
environment variable. A red badge with a “-” (minus) in it means the
variable was successfully removed; it does not mean removing the variable
failed. Failures are indicated with a red badge with an “x” (cross) in it.
+Wait for the application to be redeployed with the new configuration:
+heroku ps:wait -a crates-io
+
+Update the status page and mark the maintenance as completed with this +message:
+++Scheduled maintenance finished successfully.
+
The message is posted right now and not at the end because this is when +production users are not impacted by the maintenance anymore.
+Scale the background worker up again:
+heroku ps:scale -a crates-io background_worker=1
+
+Confirm the follower database is available:
+echo "SELECT 1;" | heroku pg:psql -a crates-io READ_ONLY_REPLICA
+
+Enable connections to the follower:
+heroku config:unset -a crates-io DB_OFFLINE
+
+Re-enable the background job disabled during step 1.
+Performing maintenance on the follower database doesn’t require any external +communication nor putting the application in read-only mode, as we can just +redirect all of the follower’s traffic to the primary database. It shouldn’t be +done during peak traffic periods though, as we’ll increase the primary database +load by doing this.
+At the start of the maintenance
+Configure the application to operate without the follower:
+heroku config:set -a crates-io DB_OFFLINE=follower
+
+Wait for the application to be redeployed with the new configuration:
+heroku ps:wait -a crates-io
+
+Start the database maintenance:
+heroku pg:maintenance:run --force -a crates-io READ_ONLY_REPLICA
+
+Wait for the maintenance to finish:
+heroku pg:wait -a crates-io READ_ONLY_REPLICA
+
+Confirm the follower database is ready:
+heroku pg:info -a crates-io
+
+Confirm the follower database is responding to queries:
+echo "SELECT 1;" | heroku pg:psql -a crates-io READ_ONLY_REPLICA
+
+Enable connections to the follower:
+heroku config:unset -a crates-io DB_OFFLINE
+
+Wait for the application to be redeployed with the new configuration.
+heroku ps:wait -a crates-io
+
+This section documents the processes of the crates.io team.
+ +Redirecting to... https://rustc-dev-guide.rust-lang.org/compiler-debugging.html.
+ + diff --git a/docs-rs/add-dependencies.html b/docs-rs/add-dependencies.html new file mode 100644 index 000000000..3091c73bd --- /dev/null +++ b/docs-rs/add-dependencies.html @@ -0,0 +1,250 @@ + + + + + +Rustwide internally uses rustops/crates-build-env
as the build environment for the crate. If you want to add a system package for crates to link to, this is place you’re looking for.
Docker and docker-compose must be installed. For example, on Debian or Ubuntu:
+sudo apt-get install docker.io docker-compose
+
+First, clone the crates-build-env and the docs.rs repos:
+git clone https://github.com/rust-lang/crates-build-env
+git clone https://github.com/rust-lang/docs.rs
+
+Set the path to the directory of your crate. This must be an absolute path, not a relative path! On platforms with coreutils, you can instead use $(realpath ../relative/path)
(relative to the docs.rs directory).
YOUR_CRATE=/path/to/your/crate
+
+Next, add the package to crates-build-env/linux/packages.txt
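As a quick illustration of why realpath helps here, it resolves a relative path to an absolute one (the directory name below is illustrative):

```shell
# Create a stand-in crate directory and resolve it to an absolute path.
mkdir -p /tmp/demo-crate
cd /tmp
YOUR_CRATE=$(realpath ./demo-crate)   # relative input, absolute output
echo "$YOUR_CRATE"
```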
in the correct alphabetical order. This should be the name of a package in the Ubuntu 20.04 Repositories. See the package home page for a full list/search bar, or use apt search
locally.
Now build the image. This will take a very long time, probably 10-20 minutes.
+cd crates-build-env/linux
+docker build --tag build-env .
+
+Use the image to build your crate.
+cd ../../docs.rs
+cp .env.sample .env
+docker-compose build
+# avoid docker-compose creating the volume if it doesn't exist
+if [ -e "$YOUR_CRATE" ]; then
+ docker-compose run -e DOCSRS_DOCKER_IMAGE=build-env \
+ -e RUST_BACKTRACE=1 \
+ -v "$YOUR_CRATE":/opt/rustwide/workdir \
+ web build crate --local /opt/rustwide/workdir
+else
+ echo "$YOUR_CRATE does not exist";
+fi
+
+If your build fails even after your changes, it will be annoying to rebuild the image from scratch just to add a single package. Instead, you can make changes directly to the Dockerfile so that the existing packages are cached. Be sure to move these new packages from the Dockerfile to packages.txt
once you are sure they work.
On line 7 of the Dockerfile, add this line: RUN apt-get install -y your_second_package
.
+Rerun the build and start the container; it should take much less time now:
cd ../crates-build-env/linux
+docker build --tag build-env .
+cd ../../docs.rs
+docker-compose run -e DOCSRS_DOCKER_IMAGE=build-env \
+ -e RUST_BACKTRACE=1 \
+ -v "$YOUR_CRATE":/opt/rustwide/workdir \
+ web build crate --local /opt/rustwide/workdir
+
+Before you make a PR, run the shell script lint.sh
and make sure it passes. It ensures packages.txt
is in order and will tell you exactly what changes you need to make if not.
cd ../crates-build-env
+./lint.sh
+
+Once you are sure your package builds, you can make a pull request to get it adopted upstream for docs.rs and crater. Go to https://github.com/rust-lang/crates-build-env and click ‘Fork’ in the top right. Locally, add your fork as a remote in git and push your changes:
+git remote add personal https://github.com/<your_username_here>/crates-build-env
+git add -u
+git commit -m 'add packages necessary for <your_package_here> to compile'
+git push personal
+
+Back on github, make a pull request:
+Hopefully your changes will be merged quickly! After that you can either publish a point release (rebuilds your docs immediately) or request for a member of the docs.rs team to schedule a new build (may take a while depending on their schedules).
+ +docs.rs is a website that hosts documentation for crates published to crates.io.
+docsrs.infra.rust-lang.org
(behind the bastion – how to connect)It might happen that a crate fails to build repeatedly due to a docs.rs bug, +clogging up the queue and preventing other crates to build. In this case it’s +possible to temporarily remove the crate from the queue until the docs.rs’s bug +is fixed. To do that, log into the machine and open a PostgreSQL shell with:
+$ psql
+
+Then you can run this SQL query to remove the crate:
+UPDATE queue SET attempt = 100 WHERE name = '<CRATE_NAME>';
+
+To add the crate back in the queue you can run in the PostgreSQL shell this +query:
+UPDATE queue SET attempt = 0 WHERE name = '<CRATE_NAME>';
+
+Sometimes the latest nightly might be broken, causing doc builds to fail. In
+those cases it’s possible to tell docs.rs to stop updating to the latest
+nightly and instead pin a specific release. To do that you need to edit the
+/home/cratesfyi/.docs-rs-env
file, adding or changing this environment
+variable:
CRATESFYI_TOOLCHAIN=nightly-YYYY-MM-DD
+
+Once the file changed docs.rs needs to be restarted:
+systemctl restart docs.rs
+
+To return to the latest nightly simply remove the environment variable and +restart docs.rs again.
+If a bug was recently fixed, you may want to rebuild a crate so that it builds with the latest version. +From the docs.rs machine:
+cratesfyi queue add <crate> <version>
+
+This will add the crate with a lower priority than new crates by default, you can change the priority with the -p
option.
Occasionally crates will ask for their build limits to be raised.
+You can raise them from the docs.rs machine with psql
.
Raising a memory limit to 8 GB:
+# memory is measured in bytes
+cratesfyi=> INSERT INTO sandbox_overrides (crate_name, max_memory_bytes)
+ VALUES ('crate name', 8589934592);
+
+Raising a timeout to 15 minutes:
+cratesfyi=> INSERT INTO sandbox_overrides (crate_name, timeout_seconds)
+ VALUES ('crate name', 900);
+
+Raising limits for multiple crates at once:
+cratesfyi=> INSERT INTO sandbox_overrides (crate_name, max_memory_bytes)
+ VALUES ('stm32f4', 8589934592), ('stm32h7', 8589934592), ('stm32g4', 8589934592);
+
+When many crates from the same project are published at once, they take up a +lot of space in the queue. You can de-prioritize groups of crates at once like +this:
+cratesfyi=> INSERT INTO crate_priorities (pattern, priority)
+ VALUES ('group-%', 1);
+
+The pattern
should be a LIKE
pattern as documented on
+https://www.postgresql.org/docs/current/functions-matching.html.
Note that this only sets the default priority for crates with that name. +If there are crates already in the queue, you’ll have to update those manually:
+cratesfyi=> UPDATE queue SET priority = 1 WHERE name LIKE 'group-%';
+
+After an outage you might want to add all the failed builds back to the queue. +To do that, log into the machine and open a PostgreSQL shell with:
+psql
+
+Then you can run this SQL query to add all the crates failed after YYYY-MM-DD HH:MM:SS
back in the queue:
UPDATE queue SET attempt = 0 WHERE attempt >= 5 AND build_time > 'YYYY-MM-DD HH:MM:SS';
+
+Sometimes it might be needed to remove all the content related to a crate from +docs.rs (for example after receiving a DMCA). To do that, log into the server +and run:
+cratesfyi database delete-crate CRATE_NAME
+
+The command will remove all the data from the database, and then remove the +files from S3.
+Occasionally it might be needed to prevent a crate from being built on docs.rs, +for example if we can’t legally host the content of those crates. To add a +crate to the blacklist, preventing new builds for it, you can run:
+cratesfyi database blacklist add <CRATE_NAME>
+
+Other operations (such as list
and remove
) are also supported.
++ +Warning: blacklisting a crate doesn’t remove existing content from the +website, it just prevents new versions from being built!
+
These are instructions for deploying the server in a production environment. For instructions on developing locally without docker-compose, see Developing without docker-compose.
+Here is a breakdown of what it takes to turn a regular server into its own version of docs.rs.
+Beware: This process is rather rough! Attempts at cleaning it up, automating setup components, etc, would be greatly appreciated!
+The commands and package names on this page will assume an Ubuntu server running systemd, but hopefully the explanatory text should give enough information to adapt to other systems. Note that docs.rs depends on the host being x86_64-unknown-linux-gnu
.
Docs.rs has a few basic requirements:
+rustup
)pkg-config
(to build dependencies for crates and docs.rs itself)libmagic
(to link against)$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly
+$ source $HOME/.cargo/env
+# apt install build-essential git curl cmake gcc g++ pkg-config libmagic-dev libssl-dev zlib1g-dev postgresql lxc-utils
+
+cratesfyi
userTo help things out later on, we can create a new unprivileged user that will run the server process. This user will own all the files required by the docs.rs process. This user will need to be able to run lxc-attach
through sudo
to be able to run docs builds, so give it a sudoers file at the same time:
# adduser --disabled-login --disabled-password --gecos "" cratesfyi
+# echo 'cratesfyi ALL=(ALL) NOPASSWD: /usr/bin/lxc-attach' > /etc/sudoers.d/cratesfyi
+
+(The name cratesfyi
is a historical one: Before the site was called “docs.rs”, it was called “crates.fyi” instead. If you want to update the name of the user, feel free! Just be aware that the name cratesfyi
will be used throughout this document.)
In addition to the LXC container, docs.rs also stores several related files in a “prefix” directory. This directory can be stored anywhere, but the cratesfyi user needs to be able to access it:
# mkdir /cratesfyi-prefix
# chown cratesfyi:cratesfyi /cratesfyi-prefix
Now we can set up some required folders. To make sure they all have proper ownership, run them all as cratesfyi:
$ sudo -u cratesfyi mkdir -vp /cratesfyi-prefix/documentations /cratesfyi-prefix/public_html /cratesfyi-prefix/sources
$ sudo -u cratesfyi git clone https://github.com/rust-lang/crates.io-index.git /cratesfyi-prefix/crates.io-index
$ sudo -u cratesfyi git --git-dir=/cratesfyi-prefix/crates.io-index/.git branch crates-index-diff_last-seen
(That last command is used to set up the crates-index-diff crate, so we can start monitoring new crate releases.)
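Roughly speaking, crates-index-diff treats that branch as a high-water mark: index commits past crates-index-diff_last-seen correspond to unseen releases, and the branch is advanced once they are processed. The idea can be sketched with plain git on a throwaway repository:

```shell
#!/bin/sh
# Demonstrate the last-seen-branch technique on a scratch repository.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "Updating crate foo#1.0.0"
git -C "$repo" branch last-seen   # mark everything processed so far

git -C "$repo" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "Updating crate bar#0.2.0"

# Commits past the marker are the unseen releases:
git -C "$repo" log --oneline last-seen..HEAD

# After handling them, advance the marker:
git -C "$repo" branch -f last-seen HEAD
```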
To help contain what crates’ build scripts can access, documentation builds run inside an LXC container. To create one inside the prefix directory:
# LANG=C lxc-create -n cratesfyi-container -P /cratesfyi-prefix -t download -- --dist ubuntu --release bionic --arch amd64
# ln -s /cratesfyi-prefix/cratesfyi-container /var/lib/lxc
# chmod 755 /cratesfyi-prefix/cratesfyi-container
# chmod 755 /var/lib/lxc
(To make deployment simpler, it’s important that the OS the container is using is the same as the host! In this case, the host is assumed to be running 64-bit Ubuntu 18.04. If you make the container use a different release or distribution, you’ll need to build docs.rs separately inside the container when deploying.)

You’ll also need to configure networking for the container. The following is a sample /etc/default/lxc-net that enables NAT networking for the container:
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DHCP_CONFILE=""
LXC_DOMAIN=""
In addition, you’ll need to set the container’s configuration to use this. Add the following lines to /cratesfyi-prefix/cratesfyi-container/config:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
Now you can reload the LXC network configuration, start up the container, and set it up to auto-start when the host boots:

# systemctl restart lxc-net
# systemctl enable lxc@cratesfyi-container.service
# systemctl start lxc@cratesfyi-container.service
Now we need to do some setup inside this container. You can either copy all these commands so that each one attaches on its own, or you can run lxc-console -n cratesfyi-container to open a root shell inside the container and skip the lxc-attach prefix.
# lxc-attach -n cratesfyi-container -- apt update
# lxc-attach -n cratesfyi-container -- apt upgrade
# lxc-attach -n cratesfyi-container -- apt install curl ca-certificates binutils gcc libc6-dev libmagic1 pkg-config build-essential
Inside the container, we also need to set up a cratesfyi user, and install Rust for it. In addition to the base Rust installation, we also need to install all the default targets so that we can build docs for all the Tier 1 platforms. The Rust compiler installed inside the container is the one that builds all the docs, so if you want to use a new Rustdoc feature, this is the compiler to update.
lxc-attach -n cratesfyi-container -- adduser --disabled-login --disabled-password --gecos "" cratesfyi
lxc-attach -n cratesfyi-container -- su - cratesfyi -c "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly"
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-apple-darwin'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-pc-windows-msvc'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-unknown-linux-gnu'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add x86_64-apple-darwin'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add x86_64-pc-windows-msvc'
Now that we have Rust installed inside the container, we can use a trick to give the cratesfyi user on the host the same Rust compiler as the container. By symlinking the following directories into its home directory, we don’t need to track a third toolchain.
for directory in .cargo .rustup .multirust; do [[ -h /home/cratesfyi/$directory ]] || sudo -u cratesfyi ln -vs /var/lib/lxc/cratesfyi-container/rootfs/home/cratesfyi/$directory /home/cratesfyi/; done
Environment for the cratesfyi user

To ensure that the docs.rs server is configured properly, we need to set a few environment variables. The primary ones are going into a separate environment file, so we can load them into the systemd service that will manage the server.

Write the following into /home/cratesfyi/.cratesfyi.env. If you have a GitHub access token that the site can use to collect repository information, add it here; otherwise leave it blank. The variables need to exist, but they can be blank to skip that collection.
CRATESFYI_PREFIX=/cratesfyi-prefix
CRATESFYI_DATABASE_URL=postgresql://cratesfyi:password@localhost
CRATESFYI_CONTAINER_NAME=cratesfyi-container
CRATESFYI_GITHUB_USERNAME=
CRATESFYI_GITHUB_ACCESSTOKEN=
RUST_LOG=cratesfyi
Now add the following to /home/cratesfyi/.profile:
export $(cat $HOME/.cratesfyi.env | xargs -d '\n')
export PATH="$HOME/.cargo/bin:$PATH"
export PATH="$PATH:$HOME/docs.rs/target/release"
Now we can actually clone and build the docs.rs source! The location of it doesn’t matter much, but again, we want it to be owned by cratesfyi so it can build and run the final executable. In addition, we copy the built cratesfyi binary into the container so that it can be used to arrange builds on the inside.
sudo -u cratesfyi git clone https://github.com/rust-lang-nursery/docs.rs.git ~cratesfyi/docs.rs
sudo su - cratesfyi -c 'cd ~/docs.rs && cargo build --release'
cp -v /home/cratesfyi/docs.rs/target/release/cratesfyi /var/lib/lxc/cratesfyi-container/rootfs/usr/local/bin
Now that we have the repository built, we can use it to set up the database. Docs.rs uses a Postgres database to store information about crates and their documentation. To set one up, we first need to ask Postgres to create the database, and then run the docs.rs command to create the initial tables and content:

sudo -u postgres sh -c "psql -c \"CREATE USER cratesfyi WITH PASSWORD 'password';\""
sudo -u postgres sh -c "psql -c \"CREATE DATABASE cratesfyi OWNER cratesfyi;\""
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database init"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build add-essential-files"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build crate rand 0.5.5"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database update-search-index"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database update-release-activity"
We’re almost there! At this point, we’ve got all the pieces in place to run the site. Now we can set up a systemd service that will run the daemon that will collect crate information, orchestrate builds, and serve the website. The following systemd service file can be placed in /etc/systemd/system/cratesfyi.service:
[Unit]
Description=Cratesfyi daemon
After=network.target postgresql.service

[Service]
User=cratesfyi
Group=cratesfyi
Type=forking
PIDFile=/cratesfyi-prefix/cratesfyi.pid
EnvironmentFile=/home/cratesfyi/.cratesfyi.env
ExecStart=/home/cratesfyi/docs.rs/target/release/cratesfyi daemon
WorkingDirectory=/home/cratesfyi/docs.rs

[Install]
WantedBy=multi-user.target
Enabling and running that will serve the website on http://localhost:3000, so if you want to route public traffic to it, you’ll need to set up something like nginx to proxy the connections to it.
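As a sketch, a minimal nginx server block for that proxying might look like the following (the domain is a placeholder, and you would likely want TLS in front of it):

```nginx
server {
    listen 80;
    server_name docs.example.com;  # placeholder domain

    location / {
        # Forward requests to the docs.rs daemon on localhost:3000.
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```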
If you want to update the Rust compiler used to build crates (and the Rustdoc that comes with it), you need to make sure you don’t interrupt any existing crate builds. The daemon waits for 60 seconds between checking for new crates, so you need to make sure you catch it during that window. Since we hooked the daemon into systemd, the logs will be available in its journal. Running journalctl -efu cratesfyi (it may need to be run as root if nothing appears) will show the latest log output and show new entries as they appear. You’re looking for a message like “Finished building new crates, going back to sleep” or “Queue is empty, going back to sleep”, which indicates that the crate-building thread is waiting.
To prevent the queue from building more crates, run the following:

sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build lock"
This will create a lock file in the prefix directory that will prevent more crates from being built. At this point, you can update the rustc inside the container and add the rustdoc static files to the database:

lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup update'
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build add-essential-files"
Once this is done, you can unlock the queue to allow crates to build again:

sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build unlock"
And we’re done! New crates will start being built with the new rustc. If you want to rebuild any existing docs with the new rustdoc, you need to manually build them: there’s no automated way to rebuild failed docs or docs from a certain Rust version yet.
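The lock/unlock pair used above boils down to a file-presence check in the prefix directory. A minimal sketch of the idea; the lock-file name here is an assumption, not necessarily the one docs.rs actually uses:

```shell
#!/bin/sh
# Hypothetical lock-file guard mirroring `build lock` / `build unlock`.
PREFIX=$(mktemp -d)               # stand-in for /cratesfyi-prefix in this sketch
LOCK="$PREFIX/cratesfyi.lock"     # file name is an assumption

lock_queue()   { touch "$LOCK"; }
unlock_queue() { rm -f "$LOCK"; }
queue_locked() { [ -f "$LOCK" ]; }

lock_queue
queue_locked && echo "queue locked; builder will sleep"
unlock_queue
queue_locked || echo "queue unlocked; builds resume"
```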
To update the code for docs.rs itself, you can follow a similar approach. First, watch the logs so you can stop the daemon from building more crates. (You can replace the lock command with a systemctl stop cratesfyi if you don’t mind the web server being down while you build.)
# journalctl -efu cratesfyi
(wait for build daemon to sleep)
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build lock"
Once the daemon has stopped, you can start updating the code and rebuilding:

$ sudo su - cratesfyi -c "cd ~/docs.rs && git pull"
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo build --release"
Now that we have a shiny new build, we need to make sure the service is using it:

# cp -v /home/cratesfyi/docs.rs/target/release/cratesfyi /var/lib/lxc/cratesfyi-container/rootfs/usr/local/bin
# systemctl restart cratesfyi
Next, we can unlock the builder so it can start checking new crates:

$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build unlock"
And we’re done! Changes to the site or the build behavior should be visible now.
+ +Redirecting to... https://rustc-dev-guide.rust-lang.org/implementing_new_features.html.
+ + diff --git a/fonts/OPEN-SANS-LICENSE.txt b/fonts/OPEN-SANS-LICENSE.txt new file mode 100644 index 000000000..d64569567 --- /dev/null +++ b/fonts/OPEN-SANS-LICENSE.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. 
Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative 
Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/fonts/SOURCE-CODE-PRO-LICENSE.txt b/fonts/SOURCE-CODE-PRO-LICENSE.txt new file mode 100644 index 000000000..366206f54 --- /dev/null +++ b/fonts/SOURCE-CODE-PRO-LICENSE.txt @@ -0,0 +1,93 @@ +Copyright 2010, 2012 Adobe Systems Incorporated (http://www.adobe.com/), with Reserved Font Name 'Source'. All Rights Reserved. Source is a trademark of Adobe Systems Incorporated in the United States and/or other countries. + +This Font Software is licensed under the SIL Open Font License, Version 1.1. 
+This license is copied below, and is also available with a FAQ at: +http://scripts.sil.org/OFL + + +----------------------------------------------------------- +SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007 +----------------------------------------------------------- + +PREAMBLE +The goals of the Open Font License (OFL) are to stimulate worldwide +development of collaborative font projects, to support the font creation +efforts of academic and linguistic communities, and to provide a free and +open framework in which fonts may be shared and improved in partnership +with others. + +The OFL allows the licensed fonts to be used, studied, modified and +redistributed freely as long as they are not sold by themselves. The +fonts, including any derivative works, can be bundled, embedded, +redistributed and/or sold with any software provided that any reserved +names are not used by derivative works. The fonts and derivatives, +however, cannot be released under any other type of license. The +requirement for fonts to remain under this license does not apply +to any document created using the fonts or their derivatives. + +DEFINITIONS +"Font Software" refers to the set of files released by the Copyright +Holder(s) under this license and clearly marked as such. This may +include source files, build scripts and documentation. + +"Reserved Font Name" refers to any names specified as such after the +copyright statement(s). + +"Original Version" refers to the collection of Font Software components as +distributed by the Copyright Holder(s). + +"Modified Version" refers to any derivative made by adding to, deleting, +or substituting -- in part or in whole -- any of the components of the +Original Version, by changing formats or by porting the Font Software to a +new environment. + +"Author" refers to any designer, engineer, programmer, technical +writer or other person who contributed to the Font Software. 
+ +PERMISSION & CONDITIONS +Permission is hereby granted, free of charge, to any person obtaining +a copy of the Font Software, to use, study, copy, merge, embed, modify, +redistribute, and sell modified and unmodified copies of the Font +Software, subject to the following conditions: + +1) Neither the Font Software nor any of its individual components, +in Original or Modified Versions, may be sold by itself. + +2) Original or Modified Versions of the Font Software may be bundled, +redistributed and/or sold with any software, provided that each copy +contains the above copyright notice and this license. These can be +included either as stand-alone text files, human-readable headers or +in the appropriate machine-readable metadata fields within text or +binary files as long as those fields can be easily viewed by the user. + +3) No Modified Version of the Font Software may use the Reserved Font +Name(s) unless explicit written permission is granted by the corresponding +Copyright Holder. This restriction only applies to the primary font name as +presented to the users. + +4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font +Software shall not be used to promote, endorse or advertise any +Modified Version, except to acknowledge the contribution(s) of the +Copyright Holder(s) and the Author(s) or with their explicit written +permission. + +5) The Font Software, modified or unmodified, in part or in whole, +must be distributed entirely under this license, and must not be +distributed under any other license. The requirement for fonts to +remain under this license does not apply to any document created +using the Font Software. + +TERMINATION +This license becomes null and void if any of the above conditions are +not met. 
+ +DISCLAIMER +THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT +OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE +COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL +DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM +OTHER DEALINGS IN THE FONT SOFTWARE. diff --git a/fonts/fonts.css b/fonts/fonts.css new file mode 100644 index 000000000..858efa598 --- /dev/null +++ b/fonts/fonts.css @@ -0,0 +1,100 @@ +/* Open Sans is licensed under the Apache License, Version 2.0. See http://www.apache.org/licenses/LICENSE-2.0 */ +/* Source Code Pro is under the Open Font License. See https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=OFL */ + +/* open-sans-300 - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: normal; + font-weight: 300; + src: local('Open Sans Light'), local('OpenSans-Light'), + url('open-sans-v17-all-charsets-300.woff2') format('woff2'); +} + +/* open-sans-300italic - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: italic; + font-weight: 300; + src: local('Open Sans Light Italic'), local('OpenSans-LightItalic'), + url('open-sans-v17-all-charsets-300italic.woff2') format('woff2'); +} + +/* open-sans-regular - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: normal; + font-weight: 400; + src: local('Open Sans Regular'), local('OpenSans-Regular'), + url('open-sans-v17-all-charsets-regular.woff2') format('woff2'); +} + +/* open-sans-italic - 
latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: italic; + font-weight: 400; + src: local('Open Sans Italic'), local('OpenSans-Italic'), + url('open-sans-v17-all-charsets-italic.woff2') format('woff2'); +} + +/* open-sans-600 - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: normal; + font-weight: 600; + src: local('Open Sans SemiBold'), local('OpenSans-SemiBold'), + url('open-sans-v17-all-charsets-600.woff2') format('woff2'); +} + +/* open-sans-600italic - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: italic; + font-weight: 600; + src: local('Open Sans SemiBold Italic'), local('OpenSans-SemiBoldItalic'), + url('open-sans-v17-all-charsets-600italic.woff2') format('woff2'); +} + +/* open-sans-700 - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: normal; + font-weight: 700; + src: local('Open Sans Bold'), local('OpenSans-Bold'), + url('open-sans-v17-all-charsets-700.woff2') format('woff2'); +} + +/* open-sans-700italic - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: italic; + font-weight: 700; + src: local('Open Sans Bold Italic'), local('OpenSans-BoldItalic'), + url('open-sans-v17-all-charsets-700italic.woff2') format('woff2'); +} + +/* open-sans-800 - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: normal; + font-weight: 800; + src: local('Open Sans ExtraBold'), local('OpenSans-ExtraBold'), + url('open-sans-v17-all-charsets-800.woff2') format('woff2'); +} + +/* open-sans-800italic - latin_vietnamese_latin-ext_greek-ext_greek_cyrillic-ext_cyrillic */ +@font-face { + font-family: 'Open Sans'; + font-style: italic; + 
+    font-weight: 800;
+    src: local('Open Sans ExtraBold Italic'), local('OpenSans-ExtraBoldItalic'),
+         url('open-sans-v17-all-charsets-800italic.woff2') format('woff2');
+}
+
+/* source-code-pro-500 - latin_vietnamese_latin-ext_greek_cyrillic-ext_cyrillic */
+@font-face {
+    font-family: 'Source Code Pro';
+    font-style: normal;
+    font-weight: 500;
+    src: url('source-code-pro-v11-all-charsets-500.woff2') format('woff2');
+}
diff --git a/fonts/open-sans-v17-all-charsets-300.woff2 b/fonts/open-sans-v17-all-charsets-300.woff2
new file mode 100644
index 000000000..9f51be370
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-300.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-300italic.woff2 b/fonts/open-sans-v17-all-charsets-300italic.woff2
new file mode 100644
index 000000000..2f5454484
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-300italic.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-600.woff2 b/fonts/open-sans-v17-all-charsets-600.woff2
new file mode 100644
index 000000000..f503d558d
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-600.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-600italic.woff2 b/fonts/open-sans-v17-all-charsets-600italic.woff2
new file mode 100644
index 000000000..c99aabe80
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-600italic.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-700.woff2 b/fonts/open-sans-v17-all-charsets-700.woff2
new file mode 100644
index 000000000..421a1ab25
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-700.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-700italic.woff2 b/fonts/open-sans-v17-all-charsets-700italic.woff2
new file mode 100644
index 000000000..12ce3d20d
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-700italic.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-800.woff2 b/fonts/open-sans-v17-all-charsets-800.woff2
new file mode 100644
index 000000000..c94a223b0
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-800.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-800italic.woff2 b/fonts/open-sans-v17-all-charsets-800italic.woff2
new file mode 100644
index 000000000..eed7d3c63
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-800italic.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-italic.woff2 b/fonts/open-sans-v17-all-charsets-italic.woff2
new file mode 100644
index 000000000..398b68a08
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-italic.woff2 differ
diff --git a/fonts/open-sans-v17-all-charsets-regular.woff2 b/fonts/open-sans-v17-all-charsets-regular.woff2
new file mode 100644
index 000000000..8383e94c6
Binary files /dev/null and b/fonts/open-sans-v17-all-charsets-regular.woff2 differ
diff --git a/fonts/source-code-pro-v11-all-charsets-500.woff2 b/fonts/source-code-pro-v11-all-charsets-500.woff2
new file mode 100644
index 000000000..722245682
Binary files /dev/null and b/fonts/source-code-pro-v11-all-charsets-500.woff2 differ
diff --git a/fott.html b/fott.html
new file mode 100644
index 000000000..67d271296
--- /dev/null
+++ b/fott.html
@@ -0,0 +1,12 @@
+Redirecting to... /archive/fott.html.
diff --git a/github.html b/github.html
new file mode 100644
index 000000000..2e993280e
--- /dev/null
+++ b/github.html
@@ -0,0 +1,12 @@
+Redirecting to... platforms/github.html.
diff --git a/governance/index.html b/governance/index.html
new file mode 100644
index 000000000..260ccc06f
--- /dev/null
+++ b/governance/index.html
@@ -0,0 +1,599 @@
+IMPORTANT This document is adapted from RFC 1068 and is currently being actively worked on; however, there may be large parts of Rust’s governance that are missing, incomplete, or out of date.
+The core team serves as leadership for the Rust project as a whole. In +particular, it:
+Sets the overall direction and vision for the project. That means setting the core values that are used when making decisions about technical tradeoffs. It means steering the project toward specific use cases where Rust can have a major impact. It means leading the discussion of, and writing RFCs for, major initiatives in the project.
+Sets the priorities and release schedule. Design bandwidth is limited, and +it’s dangerous to try to grow the language too quickly; the core team makes +some difficult decisions about which areas to prioritize for new design, based +on the core values and target use cases.
+Focuses on broad, cross-cutting concerns. The core team is specifically +designed to take a global view of the project, to make sure the pieces are +fitting together in a coherent way.
+Spins up or shuts down subteams. Over time, we may want to expand the set +of subteams, and it may make sense to have temporary “strike teams” that focus +on a particular, limited task.
+Decides whether/when to ungate a feature. While the subteams make +decisions on RFCs, the core team is responsible for pulling the trigger that +moves a feature from nightly to stable. This provides an extra check that +features have adequately addressed cross-cutting concerns, that the +implementation quality is high enough, and that language/library commitments +are reasonable.
+The core team should include both the subteam leaders and, over time, a diverse set of other stakeholders who are actively involved in the Rust community and can speak to the needs of major Rust constituencies, to ensure that the project is addressing real-world needs.
+The primary roles of each subteam are:
+Shepherding RFCs for the subteam area. As always, that means (1) ensuring that +stakeholders are aware of the RFC, (2) working to tease out various design +tradeoffs and alternatives, and (3) helping build consensus.
+Accepting or rejecting RFCs in the subteam area.
+Setting policy on what changes in the subteam area require RFCs, and reviewing +direct PRs for changes that do not require an RFC.
+Delegating reviewer rights for the subteam area. The ability to r+ is not limited to team members, and in fact earning r+ rights is a good stepping stone toward team membership. Each team should set reviewing policy, manage reviewing rights, and ensure that reviews take place in a timely manner. (Thanks to Nick Cameron for this suggestion.)
Subteams make it possible to involve a larger, more diverse group in the +decision-making process. In particular, they should involve a mix of:
+Rust project leadership, in the form of at least one core team member (the +leader of the subteam).
+Area experts: people who have a lot of interest and expertise in the subteam +area, but who may be far less engaged with other areas of the project.
+Stakeholders: people who are strongly affected by decisions in the +subteam area, but who may not be experts in the design or +implementation of that area. It is crucial that some people heavily +using Rust for applications/libraries have a seat at the table, to +make sure we are actually addressing real-world needs.
+Members should have demonstrated a good sense for design and dealing with +tradeoffs, an ability to work within a framework of consensus, and of course +sufficient knowledge about or experience with the subteam area. Leaders should +in addition have demonstrated exceptional communication, design, and people +skills. They must be able to work with a diverse group of people and help lead +it toward consensus and execution.
+Each subteam is led by a member of the core team. The leader is responsible for:
+Setting up the subteam:
+Deciding on the initial membership of the subteam (in consultation with the core team); once the subteam is up and running, membership changes are handled by the subteam itself.
+Working with subteam members to determine and publish subteam policies and +mechanics, including the way that subteam members join or leave the team +(which should be based on subteam consensus).
+Communicating core team vision downward to the subteam.
+Alerting the core team to subteam RFCs that need global, cross-cutting +attention, and to RFCs that have entered the “final comment period” (see below).
+Ensuring that RFCs and PRs are progressing at a reasonable rate, re-assigning +shepherds/reviewers as needed.
+Making final decisions in cases of contentious RFCs that are unable to reach +consensus otherwise (should be rare).
+The way that subteams communicate internally and externally is left to each +subteam to decide, but:
+Technical discussion should take place as much as possible on public forums, +ideally on RFC/PR threads and tagged discuss posts.
+Each subteam will have a dedicated +internals forum tag.
+Subteams should actively seek out discussion and input from stakeholders who +are not members of the team.
+Subteams should have some kind of regular meeting or other way of making +decisions. The content of this meeting should be summarized with the rationale +for each decision – and, as explained below, decisions should generally be +about weighting a set of already-known tradeoffs, not discussing or +discovering new rationale.
+Subteams should regularly publish the status of RFCs, PRs, and other news +related to their area. Ideally, this would be done in part via a dashboard +like the Homu queue.
+Rust has long used a form of consensus decision-making. In a nutshell, the premise is that a successful outcome is not one where one side of a debate has “won”, but rather one where concerns from all sides have been addressed in some way. This emphatically does not entail design by committee or compromised design. Rather, it’s a recognition that
+++… every design or implementation choice carries a trade-off and numerous +costs. There is seldom a right answer.
+
Breakthrough designs sometimes end up changing the playing field by eliminating +tradeoffs altogether, but more often difficult decisions have to be made. The +key is to have a clear vision and set of values and priorities, which is the +core team’s responsibility to set and communicate, and the subteam’s +responsibility to act upon.
+Whenever possible, we seek to reach consensus through discussion and design +revision. Concretely, the steps are:
+Consensus is reached when most people are left with only “minor” objections, +i.e., while they might choose the tradeoffs slightly differently they do not +feel a strong need to actively block the RFC from progressing.
+One important question is: consensus among which people, exactly? Of course, the +broader the consensus, the better. But at the very least, consensus within the +members of the subteam should be the norm for most decisions. If the core team +has done its job of communicating the values and priorities, it should be +possible to fit the debate about the RFC into that framework and reach a fairly +clear outcome.
+In some cases, though, consensus cannot be reached. These cases tend to split +into two very different camps:
+“Trivial” reasons, e.g., there is not widespread agreement about naming, but +there is consensus about the substance.
+“Deep” reasons, e.g., the design fundamentally improves one set of concerns at +the expense of another, and people on both sides feel strongly about it.
+In either case, an alternative form of decision-making is needed.
+For the “trivial” case, usually either the RFC shepherd or subteam leader will +make an executive decision.
+For the “deep” case, the subteam leader is empowered to make a final decision, +but should consult with the rest of the core team before doing so.
+Each RFC has a shepherd drawn from the relevant subteam. The shepherd is +responsible for driving the consensus process – working with both the RFC +author and the broader community to dig out problems, alternatives, and improved +design, always working to reach broader consensus.
+At some point, the RFC comments will reach a kind of “steady state”, where no +new tradeoffs are being discovered, and either objections have been addressed, +or it’s clear that the design has fundamental downsides that need to be weighed.
+At that point, the shepherd will announce that the RFC is in a “final comment +period” (which lasts for one week). This is a kind of “last call” for strong +objections to the RFC. The announcement of the final comment period for an RFC +should be very visible; it should be included in the subteam’s periodic +communications.
+++Note that the final comment period is in part intended to help keep RFCs +moving. Historically, RFCs sometimes stall out at a point where discussion has +died down but a decision isn’t needed urgently. In this proposed model, the +RFC author could ask the shepherd to move to the final comment period (and +hence toward a decision).
+
After the final comment period, the subteam can make a decision on the RFC. The +role of the subteam at that point is not to reveal any new technical issues or +arguments; if these come up during discussion, they should be added as comments +to the RFC, and it should undergo another final comment period.
+Instead, the subteam decision is based on weighing the already-revealed +tradeoffs against the project’s priorities and values (which the core team is +responsible for setting, globally). In the end, these decisions are about how to +weight tradeoffs. The decision should be communicated in these terms, pointing +out the tradeoffs that were raised and explaining how they were weighted, and +never introducing new arguments.
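The flow described above — shepherded discussion, a one-week final comment period, and a decision that is sent back for another FCP if genuinely new arguments surface — can be sketched as a small state machine. This is purely illustrative: the type and method names below are hypothetical, not part of any official Rust tooling.

```rust
// Illustrative sketch of the RFC lifecycle described above.
// Names are hypothetical; this is not official tooling.

#[derive(Debug, PartialEq)]
enum RfcState {
    /// Under active discussion, driven by a shepherd from the subteam.
    Discussion,
    /// "Last call" for strong objections; lasts one week.
    FinalCommentPeriod { days_left: u8 },
    /// The subteam weighed the already-revealed tradeoffs and decided.
    Decided { accepted: bool },
}

impl RfcState {
    /// The shepherd moves a steady-state discussion into the FCP.
    fn start_fcp(self) -> RfcState {
        match self {
            RfcState::Discussion => RfcState::FinalCommentPeriod { days_left: 7 },
            other => other,
        }
    }

    /// A genuinely new technical argument sends the RFC back to discussion,
    /// after which it must undergo another final comment period.
    fn new_argument(self) -> RfcState {
        match self {
            RfcState::FinalCommentPeriod { .. } => RfcState::Discussion,
            other => other,
        }
    }
}

fn main() {
    let rfc = RfcState::Discussion.start_fcp();
    assert_eq!(rfc, RfcState::FinalCommentPeriod { days_left: 7 });
    // A new tradeoff surfaces during the FCP: back to discussion.
    let rfc = rfc.new_argument();
    assert_eq!(rfc, RfcState::Discussion);
}
```

The key property the sketch captures is that a decision is never made while new arguments are still arriving; those always route back through another comment period.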
+In addition to the “final comment period” proposed above, this RFC proposes some +further adjustments to the RFC process to keep it lightweight.
+A key observation is that, thanks to the stability system and nightly/stable +distinction, it’s easy to experiment with features without commitment.
+Over time, we’ve been drifting toward requiring an RFC for essentially any +user-facing change, which sometimes means that very minor changes get stuck +awaiting an RFC decision. While subteams + final comment period should help keep +the pipeline flowing a bit better, it would also be good to allow “minor” +changes to go through without an RFC, provided there is sufficient review in +some other way. (And in the end, the core team ungates features, which ensures +at least a final review.)
+This RFC does not attempt to answer the question “What needs an RFC”, because +that question will vary for each subteam. However, this RFC stipulates that each +subteam should set an explicit policy about:
+These guidelines should try to keep the process lightweight for minor changes.
+While RFCs are very important, they do not represent the final state of a +design. Often new issues or improvements arise during implementation, or after +gaining some experience with a feature. The nightly/stable distinction exists +in part to allow for such design iteration.
+Thus RFCs do not need to be “perfect” before acceptance. If consensus is reached +on major points, the minor details can be left to implementation and revision.
+Later, if an implementation differs from the RFC in substantial ways, the +subteam should be alerted, and may ask for an explicit amendment RFC. Otherwise, +the changes should just be explained in the commit/PR.
+With all of that out of the way, what subteams should we start with? This RFC +proposes the following initial set:
+In the long run, we will likely also want teams for documentation and for +community events, but these can be spun up once there is a more clear need (and +available resources).
+Focuses on the design of language-level features; not all team members need to +have extensive implementation experience.
+Some example RFCs that fall into this area:
+Oversees both std and, ultimately, other crates in the rust-lang github organization. The focus up to this point has been the standard library, but we will want “official” libraries that aren’t quite std territory but are still vital for Rust. (The precise plan here, as well as the long-term plan for std, is one of the first important areas of debate for the subteam.) Also includes API conventions.
Some example RFCs that fall into this area:
+Focuses on compiler internals, including implementation of language +features. This broad category includes work in codegen, factoring of compiler +data structures, type inference, borrowck, and so on.
+There is a more limited set of example RFCs for this subteam, in part because we +haven’t generally required RFCs for this kind of internals work, but here are two:
+Even more broad is the “tooling” subteam, which at inception is planned to encompass every “official” (rust-lang managed) non-rustc tool:
It’s not presently clear exactly what tools will end up under this umbrella, nor +which should be prioritized.
+Finally, the moderation team is responsible for dealing with CoC violations.
+One key difference from the other subteams is that the moderation team does not +have a leader. Its members are chosen directly by the core team, and should be +community members who have demonstrated the highest standard of discourse and +maturity. To limit conflicts of interest, the moderation subteam should not +include any core team members. However, the subteam is free to consult with +the core team as it deems appropriate.
+The moderation team will have a public email address that can be used to raise +complaints about CoC violations (forwards to all active moderators).
+What follows is an initial proposal for the mechanics of moderation. The +moderation subteam may choose to revise this proposal by drafting an RFC, which +will be approved by the core team.
+Moderation begins whenever a moderator becomes aware of a CoC problem, either +through a complaint or by observing it directly. In general, the enforcement +steps are as follows:
+++These steps are adapted from text written by Manish Goregaokar, who helped +articulate them from experience as a Stack Exchange moderator.
+
+Except for extreme cases (see below), try first to address the problem with a light public comment on thread, aimed at de-escalating the situation. These comments should strive for as much empathy as possible. Moderators should emphasize that dissenting opinions are valued, and strive to ensure that the technical points are heard even as they work to cool things down.
+When a discussion has just gotten a bit heated, the comment can just be a +reminder to be respectful and that there is rarely a clear “right” answer. In +cases that are more clearly over the line into personal attacks, it can +directly call out a problematic comment.
+If the problem persists on thread, or if a particular person repeatedly comes +close to or steps over the line of a CoC violation, moderators then email the +offender privately. The message should include relevant portions of the CoC +together with the offending comments. Again, the goal is to de-escalate, and +the email should be written in a dispassionate and empathetic way. However, +the message should also make clear that continued violations may result in a +ban.
+If problems still persist, the moderators can ban the offender. Banning should +occur for progressively longer periods, for example starting at 1 day, then 1 +week, then permanent. The moderation subteam will determine the precise +guidelines here.
+In general, moderators can and should unilaterally take the first step, but +steps beyond that (particularly banning) should be done via consensus with the +other moderators. Permanent bans require core team approval.
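The progressive-ban guideline above can be summarized in a short sketch. The 1-day and 1-week durations are the examples given in the text; the function itself is hypothetical, since the moderation subteam determines the precise guidelines.

```rust
// Hypothetical sketch of the progressive-ban guideline described above.
// The example durations (1 day, then 1 week, then permanent) come from the
// text; real guidelines are set by the moderation subteam.

#[derive(Debug, PartialEq)]
enum Ban {
    Days(u32),
    /// Permanent bans additionally require core team approval.
    Permanent,
}

/// Given how many bans an offender has already received, return the
/// next ban length on the escalation ladder.
fn next_ban(previous_bans: u32) -> Ban {
    match previous_bans {
        0 => Ban::Days(1),
        1 => Ban::Days(7),
        _ => Ban::Permanent,
    }
}

fn main() {
    assert_eq!(next_ban(0), Ban::Days(1));
    assert_eq!(next_ban(1), Ban::Days(7));
    assert_eq!(next_ban(2), Ban::Permanent);
}
```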
+Some situations call for more immediate, drastic measures: deeply inappropriate +comments, harassment, or comments that make people feel unsafe. (See the +code of conduct for some more details +about this kind of comment). In these cases, an individual moderator is free to +take immediate, unilateral steps including redacting or removing comments, or +instituting a short-term ban until the subteam can convene to deal with the +situation.
+The moderation team is responsible for interpreting the CoC. Drastic measures +like bans should only be used in cases of clear, repeated violations.
+Moderators themselves are held to a very high standard of behavior, and should +strive for professional and impersonal interactions when dealing with a CoC +violation. They should always push to de-escalate. And they should recuse +themselves from moderation in threads where they are actively participating in +the technical debate or otherwise have a conflict of interest. Moderators who +fail to keep up this standard, or who abuse the moderation process, may be +removed by the core team.
+Subteam members, and especially core team members, are also held to a high standard of behavior. Part of the reason to separate the moderation subteam is to ensure that CoC violations by Rust’s leadership are addressed through the same independent body of moderators.
+Moderation covers all rust-lang venues, which currently include GitHub repos, IRC channels (#rust, #rust-internals, #rustc, #rust-libs), and the two Discourse forums. (The subreddit already has its own moderation structure, and isn’t directly associated with the rust-lang organization.)
Welcome to the Rust Forge! Rust Forge serves as a repository of supplementary documentation useful for members of The Rust Programming Language project. If you find any mistakes or typos, or want to add to the Rust Forge, feel free to file an issue or PR on the Rust Forge GitHub.
+Want to contribute to Rust, but don’t know where to start? Here’s a list of rust-lang projects that have marked issues that need help and issues that are good first issues.
| Repository | Description |
|---|---|
| rust | The Rust Language & Compiler |
| cargo | The Rust package manager |
| crates.io | Source code for crates.io |
| www.rust-lang.org | The Rust website |
| Channel | Version | Will be stable on | Will branch from master on |
|---|---|---|---|
| Stable | | | |
| Beta | | | |
| Nightly | | | |
| Nightly +1 | | | |
See the release process documentation for details on +what happens in the days leading up to a release.
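As context for the channel table above: Rust’s channels form a release train on a six-week cycle, so if stable carries version 1.x, beta carries 1.(x+1), nightly carries 1.(x+2), and the nightly after next (“Nightly +1”) carries 1.(x+3). A minimal sketch of that relationship (the concrete version numbers below are illustrative):

```rust
// Sketch of how the four channel rows relate on the six-week release train:
// each cycle, nightly branches to beta and the previous beta becomes stable.

fn channel_versions(stable_minor: u32) -> [(&'static str, String); 4] {
    [
        ("Stable", format!("1.{}", stable_minor)),
        ("Beta", format!("1.{}", stable_minor + 1)),
        ("Nightly", format!("1.{}", stable_minor + 2)),
        ("Nightly +1", format!("1.{}", stable_minor + 3)),
    ]
}

fn main() {
    // Illustrative: with stable at 1.40 ...
    let rows = channel_versions(40);
    assert_eq!(rows[1].1, "1.41"); // beta is one minor version ahead
    assert_eq!(rows[3].1, "1.43"); // the nightly after next
    for (channel, version) in rows {
        println!("{channel}: {version}");
    }
}
```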
+To ensure the beta release includes all the tools, no tool breakages are +allowed in the week before the beta cutoff (except for nightly-only tools).
| Beta Cut | No Breakage Week |
|---|---|