The Rust Security Response WG was notified that the Rust standard library did not properly escape arguments when invoking batch files (with the bat and cmd extensions) on Windows using the Command API. An attacker able to control the arguments passed to the spawned process could execute arbitrary shell commands by bypassing the escaping.
The severity of this vulnerability is critical if you are invoking batch files on Windows with untrusted arguments. No other platform or use is affected.
This vulnerability is identified by CVE-2024-24576.
The Command::arg and Command::args APIs state in their documentation that the arguments will be passed to the spawned process as-is, regardless of the content of the arguments, and will not be evaluated by a shell. This means it should be safe to pass untrusted input as an argument.
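To illustrate that guarantee, here is a small sketch (the command and argument are just examples): characters that would be special to a shell reach the child process verbatim, because no shell is involved.

```rust
use std::process::Command;

fn main() {
    // `echo` receives the argument exactly as written; no shell ever
    // evaluates it, so `$HOME` and `&&` are plain text to the child.
    let output = Command::new("echo")
        .arg("$HOME && untrusted")
        .output()
        .expect("failed to run echo");
    let stdout = String::from_utf8_lossy(&output.stdout);
    // The variable is NOT expanded and no second command runs.
    assert!(stdout.contains("$HOME && untrusted"));
    println!("{stdout}");
}
```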
On Windows, the implementation of this is more complex than on other platforms, because the Windows API only provides a single string containing all the arguments to the spawned process, and it's up to the spawned process to split them. Most programs use the standard C runtime argv, which in practice results in arguments being split in a mostly consistent way.
One exception, though, is cmd.exe (used among other things to execute batch files), which has its own argument splitting logic. That forces the standard library to implement custom escaping for arguments passed to batch files. Unfortunately, it was reported that our escaping logic was not thorough enough, and it was possible to pass malicious arguments that would result in arbitrary shell execution.
Due to the complexity of cmd.exe, we didn't identify a solution that would correctly escape arguments in all cases. To maintain our API guarantees, we improved the robustness of the escaping code, and changed the Command API to return an InvalidInput error when it cannot safely escape an argument. This error will be emitted when spawning the process.
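A sketch of how a caller might handle the new error (the batch file name and argument here are hypothetical; the InvalidInput kind is only produced on Windows, so on other platforms this simply reports an ordinary spawn failure):

```rust
use std::io::ErrorKind;
use std::process::Command;

fn main() {
    // Hypothetical batch file and untrusted argument.
    match Command::new("script.bat").arg("\"&calc.exe").spawn() {
        Ok(mut child) => {
            let _ = child.wait();
        }
        // On Windows with Rust >= 1.77.2, arguments that cannot be safely
        // escaped for cmd.exe fail here with ErrorKind::InvalidInput.
        Err(e) if e.kind() == ErrorKind::InvalidInput => {
            eprintln!("refusing to spawn: argument cannot be safely escaped");
        }
        Err(e) => eprintln!("spawn failed: {e}"),
    }
}
```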
The fix will be included in Rust 1.77.2, to be released later today.
If you implement the escaping yourself or only handle trusted inputs, you can also use the CommandExt::raw_arg method on Windows to bypass the standard library's escaping logic.
All Rust versions before 1.77.2 on Windows are affected, if your code or one of your dependencies executes batch files with untrusted arguments. Other platforms or other uses on Windows are not affected.
We want to thank RyotaK for responsibly disclosing this to us according to the Rust security policy, and Simon Sawicki (Grub4K) for identifying some of the escaping rules we adopted in our fix.
We also want to thank the members of the Rust project who helped us disclose the vulnerability: Chris Denton for developing the fix; Mara Bos for reviewing the fix; Pietro Albini for writing this advisory; Pietro Albini, Manish Goregaokar and Josh Stone for coordinating this disclosure; Amanieu d'Antras for advising during the disclosure.
The Rust team has published a new point release of Rust, 1.77.2. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.77.2 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
This release includes a fix for CVE-2024-24576.
Before this release, the Rust standard library did not properly escape arguments when invoking batch files (with the bat and cmd extensions) on Windows using the Command API. An attacker able to control the arguments passed to the spawned process could execute arbitrary shell commands by bypassing the escaping.
This vulnerability is CRITICAL if you are invoking batch files on Windows with untrusted arguments. No other platform or use is affected.
You can learn more about the vulnerability in the dedicated advisory.
Many people came together to create Rust 1.77.2. We couldn't have done it without all of you. Thanks!
WASI 0.2 was recently stabilized, and Rust has begun implementing first-class support for it in the form of a dedicated new target. Rust 1.78 will introduce new wasm32-wasip1 (tier 2) and wasm32-wasip2 (tier 3) targets. wasm32-wasip1 is an effective rename of the existing wasm32-wasi target, freeing the target name up for an eventual WASI 1.0 release. Starting with Rust 1.78 (May 2nd, 2024), users of WASI 0.1 are encouraged to begin migrating to the new wasm32-wasip1 target before the existing wasm32-wasi target is removed in Rust 1.84 (January 9th, 2025).
In this post we'll discuss the introduction of the new targets, the motivation behind it, what that means for the existing WASI targets, and a detailed schedule for these changes. This post is about the WASI targets only; the existing wasm32-unknown-unknown and wasm32-unknown-emscripten targets are unaffected by any changes in this post.
wasm32-wasip2
After nearly five years of work, the WASI 0.2 specification was recently stabilized. This work builds on WebAssembly Components (think: strongly-typed ABI for Wasm), providing standard interfaces for things like asynchronous IO, networking, and HTTP. This will finally make it possible to write asynchronous networked services on top of WASI, something which wasn't possible using WASI 0.1.
People interested in compiling Rust code to WASI 0.2 today are able to do so using the cargo-component tool. This tool is able to take WASI 0.1 binaries and transform them to WASI 0.2 Components using a shim. It also provides native support for common cargo commands such as cargo build, cargo test, and cargo run. While it introduces some inefficiencies because of the additional translation layer, in practice this already works really well, and people should be able to get started with WASI 0.2 development.
We're keen, however, to begin making that translation layer obsolete. For that reason we're happy to share that Rust has made its first steps towards that with the introduction of the tier 3 wasm32-wasip2 target landing in Rust 1.78. This will initially miss a lot of expected features such as stdlib support, and we don't recommend people use this target quite yet. But as we fill in those missing features over the coming months, we aim to eventually meet the criteria to become a tier 2 target, at which point the wasm32-wasip2 target would be considered ready for general use. This work will happen through 2024, and we expect it to land before the end of the calendar year.
wasm32-wasi to wasm32-wasip1
The original name for what we now call WASI 0.1 was "WebAssembly System Interface, snapshot 1". Rust shipped support for this in 2019, and we did so knowing the target would likely undergo significant changes in the future. With the knowledge we have today, though, we would not have chosen to introduce the "WASI, snapshot 1" target as wasm32-wasi. We should have instead chosen to add some suffix to the initial target triple so that the eventual stable WASI 1.0 target can just be called wasm32-wasi.
In anticipation of an eventual WASI 1.0 target, and to preserve consistency between target names, we'll begin rolling out a name change to the existing WASI 0.1 target. Starting in Rust 1.78 (May 2nd, 2024), a new wasm32-wasip1 target will become available. Starting in Rust 1.81 (September 5th, 2024), we will begin warning existing users of wasm32-wasi to migrate to wasm32-wasip1. And finally in Rust 1.84 (January 9th, 2025), the wasm32-wasi target will no longer be shipped on the stable release channel. This will provide an 8-month transition period for projects to switch to the new target name when they update their Rust toolchains.
The name wasip1 can be read as either "WASI (zero) point one" or "WASI preview one". The official specification uses the "preview" moniker; however, in most communication the form "WASI 0.1" is now preferred. This target triple was chosen because it not only maps to both terms, but also more closely resembles the target terminology used in other programming languages. This is something the WASI Preview 2 specification also makes note of.
This table provides the dates and cut-offs for the target rename from wasm32-wasi to wasm32-wasip1. The dates in this table do not apply to the newly-introduced wasm32-wasi-preview1-threads target; this will be renamed to wasm32-wasip1-threads in Rust 1.78 without going through a transition period. The tier 3 wasm32-wasip2 target will also be made available in Rust 1.78.
| Date       | Rust Stable | Rust Beta | Rust Nightly | Notes                                  |
|------------|-------------|-----------|--------------|----------------------------------------|
| 2024-02-08 | 1.76        | 1.77      | 1.78         | wasm32-wasip1 available on nightly     |
| 2024-03-21 | 1.77        | 1.78      | 1.79         | wasm32-wasip1 available on beta        |
| 2024-05-02 | 1.78        | 1.79      | 1.80         | wasm32-wasip1 available on stable      |
| 2024-06-13 | 1.79        | 1.80      | 1.81         | warn if wasm32-wasi is used on nightly |
| 2024-07-25 | 1.80        | 1.81      | 1.82         | warn if wasm32-wasi is used on beta    |
| 2024-09-05 | 1.81        | 1.82      | 1.83         | warn if wasm32-wasi is used on stable  |
| 2024-10-17 | 1.82        | 1.83      | 1.84         | wasm32-wasi unavailable on nightly     |
| 2024-11-28 | 1.83        | 1.84      | 1.85         | wasm32-wasi unavailable on beta        |
| 2025-01-09 | 1.84        | 1.85      | 1.86         | wasm32-wasi unavailable on stable      |
In this post we've discussed the upcoming updates to Rust's WASI targets. Come Rust 1.78, the wasm32-wasip1 (tier 2) and wasm32-wasip2 (tier 3) targets will be added. In Rust 1.81 we will begin warning if wasm32-wasi is being used. And in Rust 1.84, the existing wasm32-wasi target will be removed. This will free up wasm32-wasi to eventually be used for a WASI 1.0 target. Users will have 8 months to switch to the new target name when they update their Rust toolchains.
The wasm32-wasip2 target marks the start of native support for WASI 0.2. In order to target it today from Rust, people are encouraged to use the cargo-component tool instead. The plan is to eventually graduate wasm32-wasip2 to a tier 2 target, at which point cargo-component would be upgraded to target it natively instead.
With WASI 0.2 finally stable, it's an exciting time for WebAssembly development. We're happy for Rust to begin implementing native support for WASI 0.2, and we're excited for what this will enable people to build.
Rust has long had an inconsistency with C regarding the alignment of 128-bit integers on the x86-32 and x86-64 architectures. This problem has recently been resolved, but the fix comes with some effects that are worth being aware of.
As a user, you most likely do not need to worry about these changes unless you are:
- assuming the alignment of i128/u128 rather than using align_of
- ignoring the improper_ctypes* lints and using these types in FFI

There are also no changes to architectures other than x86-32 and x86-64. If your code makes heavy use of 128-bit integers, you may notice runtime performance increases at a possible cost of additional memory use.
This post documents what the problem was, what changed to fix it, and what to expect with the changes. If you are already familiar with the problem and only looking for a compatibility matrix, jump to the Compatibility section.
Data types have two intrinsic values that relate to how they can be arranged in memory: size and alignment. A type's size is the amount of space it takes up in memory, and its alignment specifies which addresses it is allowed to be placed at.
The size of simple types like primitives is usually unambiguous, being the exact size of the data they represent with no padding (unused space). For example, an i64 always has a size of 64 bits or 8 bytes.
Alignment, however, can vary. An 8-byte integer could be stored at any memory address (1-byte aligned), but most 64-bit computers will get the best performance if it is instead stored at a multiple of 8 (8-byte aligned). So, like in other languages, primitives in Rust have this most efficient alignment by default. The effects of this can be seen when creating composite types (playground link):
use core::mem::{align_of, offset_of};
#[repr(C)]
struct Foo {
a: u8, // 1-byte aligned
b: u16, // 2-byte aligned
}
#[repr(C)]
struct Bar {
a: u8, // 1-byte aligned
b: u64, // 8-byte aligned
}
fn main() {
    println!("Offset of b (u16) in Foo: {}", offset_of!(Foo, b));
    println!("Alignment of Foo: {}", align_of::<Foo>());
    println!("Offset of b (u64) in Bar: {}", offset_of!(Bar, b));
    println!("Alignment of Bar: {}", align_of::<Bar>());
}
Output:
Offset of b (u16) in Foo: 2
Alignment of Foo: 2
Offset of b (u64) in Bar: 8
Alignment of Bar: 8
We see that within a struct, a type will always be placed such that its offset is a multiple of its alignment, even if this means unused space (Rust minimizes this by default when repr(C) is not used).
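That default can be seen directly: with the default (non-repr(C)) layout the compiler is free to reorder fields to reduce unused space. The sizes below are what current rustc produces on x86-64 and are not guaranteed by the language:

```rust
use core::mem::size_of;

#[repr(C)]
struct Fixed {
    a: u8,  // offset 0, followed by 7 bytes of padding
    b: u64, // offset 8
    c: u8,  // offset 16, followed by 7 bytes of tail padding
}

// Default repr: the compiler may reorder fields to shrink the struct.
struct Reordered {
    a: u8,
    b: u64,
    c: u8,
}

fn main() {
    println!("repr(C): {} bytes", size_of::<Fixed>());     // 24 on x86-64
    println!("default: {} bytes", size_of::<Reordered>()); // 16 on current rustc
    assert!(size_of::<Reordered>() <= size_of::<Fixed>());
}
```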
These numbers are not arbitrary; the application binary interface (ABI) says what they should be. In the x86-64 psABI (processor-specific ABI) for System V (Unix & Linux), Figure 3.1: Scalar Types tells us exactly how primitives should be represented:
| C type         | Rust equivalent | sizeof | Alignment (bytes) |
|----------------|-----------------|--------|-------------------|
| char           | i8              | 1      | 1                 |
| unsigned char  | u8              | 1      | 1                 |
| short          | i16             | 2      | 2                 |
| unsigned short | u16             | 2      | 2                 |
| long           | i64             | 8      | 8                 |
| unsigned long  | u64             | 8      | 8                 |
The ABI only specifies C types, but Rust follows the same definitions both for compatibility and for the performance benefits.
If two implementations disagree on the alignment of a data type, they cannot reliably share data containing that type. Rust had inconsistent alignment for 128-bit types:
println!("alignment of i128: {}", align_of::<i128>());
// rustc 1.76.0
alignment of i128: 8
printf("alignment of __int128: %zu\n", _Alignof(__int128));
// gcc 13.2
alignment of __int128: 16
// clang 17.0.1
alignment of __int128: 16
(Godbolt link) Looking back at the psABI, we can see that Rust has the wrong alignment here:

| C type             | Rust equivalent | sizeof | Alignment (bytes) |
|--------------------|-----------------|--------|-------------------|
| __int128           | i128            | 16     | 16                |
| unsigned __int128  | u128            | 16     | 16                |
It turns out this isn't because of something that Rust is actively doing incorrectly: layout of primitives comes from the LLVM codegen backend used by both Rust and Clang, among other languages, and it has the alignment for i128 hardcoded to 8 bytes.
Clang uses the correct alignment only because of a workaround, where the alignment is manually set to 16 bytes before handing the type to LLVM. This fixes the layout issue but has been the source of some other minor problems. Rust does no such manual adjustment, hence the issue reported at https://github.com/rust-lang/rust/issues/54341.
There is an additional problem: LLVM does not always do the correct thing when passing 128-bit integers as function arguments. This was a known issue in LLVM before its relevance to Rust was discovered.
When calling a function, the arguments get passed in registers (special storage locations within the CPU) until there are no more slots, then they get "spilled" to the stack (the program's memory). The ABI tells us what to do here as well, in the section 3.2.3 Parameter Passing:
Arguments of type __int128 offer the same operations as INTEGERs, yet they do not fit into one general purpose register but require two registers. For classification purposes __int128 is treated as if it were implemented as:

typedef struct { long low, high; } __int128;

with the exception that arguments of type __int128 that are stored in memory must be aligned on a 16-byte boundary.
We can try this out by implementing the calling convention manually. In the below C example, inline assembly is used to call foo(0xaf, val, val, val) with val as 0x11223344556677889900aabbccddeeff.
x86-64 uses the registers rdi, rsi, rdx, rcx, r8, and r9 to pass function arguments, in that order (you guessed it, this is also in the ABI). Each register fits a word (64 bits), and anything that doesn't fit gets pushed to the stack.
/* full example at <https://godbolt.org/z/5c8cb5cxs> */
/* to see the issue, we need a padding value to "mess up" argument alignment */
void foo(char pad, __int128 a, __int128 b, __int128 c) {
printf("%#x\n", pad & 0xff);
print_i128(a);
print_i128(b);
print_i128(c);
}
int main() {
asm(
/* load arguments that fit in registers */
"movl $0xaf, %edi \n\t" /* 1st slot (edi): padding char (`edi` is the
* same as `rdi`, just a smaller access size) */
"movq $0x9900aabbccddeeff, %rsi \n\t" /* 2nd slot (rsi): lower half of `a` */
"movq $0x1122334455667788, %rdx \n\t" /* 3rd slot (rdx): upper half of `a` */
"movq $0x9900aabbccddeeff, %rcx \n\t" /* 4th slot (rcx): lower half of `b` */
"movq $0x1122334455667788, %r8 \n\t" /* 5th slot (r8): upper half of `b` */
"movq $0xdeadbeef4c0ffee0, %r9 \n\t" /* 6th slot (r9): should be unused, but
* let's trick clang! */
/* reuse our stored registers to load the stack */
"pushq %rdx \n\t" /* upper half of `c` gets passed on the stack */
"pushq %rsi \n\t" /* lower half of `c` gets passed on the stack */
"call foo \n\t" /* call the function */
"addq $16, %rsp \n\t" /* reset the stack */
);
}
Running the above with GCC prints the following expected output:
0xaf
0x11223344556677889900aabbccddeeff
0x11223344556677889900aabbccddeeff
0x11223344556677889900aabbccddeeff
But running with Clang 17 prints:
0xaf
0x11223344556677889900aabbccddeeff
0x11223344556677889900aabbccddeeff
0x9900aabbccddeeffdeadbeef4c0ffee0
//^^^^^^^^^^^^^^^^ this should be the lower half
// ^^^^^^^^^^^^^^^^ look familiar?
Surprise!
This illustrates the second problem: LLVM expects an i128 to be passed half in a register and half on the stack when possible, but this is not allowed by the ABI.
Since the behavior comes from LLVM and has no reasonable workaround, this is a problem in both Clang and Rust.
Getting these problems resolved was a lengthy effort by many people, starting with a patch by compiler team member Simonas Kazlauskas in 2017: D28990. Unfortunately, this wound up reverted. It was later attempted again in D86310 by LLVM contributor Harald van Dijk, which is the version that finally landed in October 2023.
Around the same time, Nikita Popov fixed the calling convention issue with D158169. Both of these changes made it into LLVM 18, meaning all relevant ABI issues will be resolved in both Clang and Rust that use this version (Clang 18 and Rust 1.78 when using the bundled LLVM).
However, rustc can also use the version of LLVM installed on the system rather than a bundled version, which may be older. To mitigate the chance of problems from differing alignment with the same rustc version, a proposal was introduced to manually correct the alignment like Clang has been doing. This was implemented by Matthew Maurer in #11672.
Since these changes, Rust now produces the correct alignment:
println!("alignment of i128: {}", align_of::<i128>());
// rustc 1.77.0
alignment of i128: 16
As mentioned above, part of the reason for an ABI to specify the alignment of a datatype is because it is more efficient on that architecture. We actually got to see that firsthand: the initial performance run with the manual alignment change showed nontrivial improvements to compiler performance (which relies heavily on 128-bit integers to work with integer literals). The downside of increasing alignment is that composite types do not always fit together as nicely in memory, leading to an increase in usage. Unfortunately this meant some of the performance wins needed to be sacrificed to avoid an increased memory footprint.
The most important question is how compatibility changed as a result of these fixes. In short, i128 and u128 with Rust using LLVM 18 (the default version starting with 1.78) will be completely compatible with any version of GCC, as well as Clang 18 and above (released March 2024). All other combinations have some incompatible cases, which are summarized in the table below:
| Compiler 1                         | Compiler 2          | Status                              |
|------------------------------------|---------------------|-------------------------------------|
| Rust ≥ 1.78 with bundled LLVM (18) | GCC (any version)   | Fully compatible                    |
| Rust ≥ 1.78 with bundled LLVM (18) | Clang ≥ 18          | Fully compatible                    |
| Rust ≥ 1.77 with LLVM ≥ 18         | GCC (any version)   | Fully compatible                    |
| Rust ≥ 1.77 with LLVM ≥ 18         | Clang ≥ 18          | Fully compatible                    |
| Rust ≥ 1.77 with LLVM ≥ 18         | Clang < 18          | Storage compatible, has calling bug |
| Rust ≥ 1.77 with LLVM < 18         | GCC (any version)   | Storage compatible, has calling bug |
| Rust ≥ 1.77 with LLVM < 18         | Clang (any version) | Storage compatible, has calling bug |
| Rust < 1.77                        | GCC (any version)   | Incompatible                        |
| Rust < 1.77                        | Clang (any version) | Incompatible                        |
| GCC (any version)                  | Clang ≥ 18          | Fully compatible                    |
| GCC (any version)                  | Clang < 18          | Storage compatible, has calling bug |
As mentioned in the introduction, most users will notice no effects of this change unless you are already doing something questionable with these types.
Starting with Rust 1.77, it will be reasonably safe to start experimenting with 128-bit integers in FFI, with some more certainty coming with the LLVM update in 1.78. There is ongoing discussion about lifting the improper_ctypes lint for these types in an upcoming version, but we want to be cautious and avoid introducing silent breakage for users whose Rust compiler may be built with an older LLVM.
The Rust team has published a new point release of Rust, 1.77.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.77.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
Cargo enabled stripping of debuginfo in release builds by default in Rust 1.77.0. However, due to a pre-existing issue, debuginfo stripping does not behave in the expected way on Windows with the MSVC toolchain.
Rust 1.77.1 therefore disables the new Cargo behavior on Windows for targets that use MSVC. There are no changes for other targets. We plan to eventually re-enable debuginfo stripping in release mode in a later Rust release.
Many people came together to create Rust 1.77.1. We couldn't have done it without all of you. Thanks!
The Rust team is happy to announce a new version of Rust, 1.77.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.77.0 with:
$ rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.77.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.
Rust now supports C-string literals (c"abc") which expand to a nul-byte terminated string in memory of type &'static CStr. This makes it easier to write code interoperating with foreign language interfaces which require nul-terminated strings, with all of the relevant error checking (e.g., lack of interior nul byte) performed at compile time.
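A minimal sketch (requires Rust 1.77 or later):

```rust
use std::ffi::CStr;

fn main() {
    // The literal has type &'static CStr and carries a trailing nul byte.
    let greeting: &'static CStr = c"Hello, world!";
    assert_eq!(greeting.to_bytes_with_nul(), b"Hello, world!\0");
    // An interior nul byte, e.g. c"ab\0cd", is rejected at compile time.
    println!("{}", greeting.to_str().unwrap());
}
```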
async fn
Async functions previously could not call themselves due to a compiler limitation. In 1.77, that limitation has been lifted, so recursive calls are permitted so long as they use some form of indirection to avoid an infinite size for the state of the function.
This means that code like this now works:
async fn fib(n: u32) -> u32 {
match n {
0 | 1 => 1,
_ => Box::pin(fib(n-1)).await + Box::pin(fib(n-2)).await
}
}
offset_of!
1.77.0 stabilizes offset_of! for struct fields, which provides access to the byte offset of the relevant public field of a struct. This macro is most useful when the offset of a field is required without an existing instance of a type. Implementing such a macro is already possible on stable, but without an instance of the type the implementation would require tricky unsafe code which makes it easy to accidentally introduce undefined behavior.
Users can now access the offset of a public field with offset_of!(StructName, field). This expands to a usize expression with the offset in bytes from the start of the struct.
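A small sketch with a hypothetical struct:

```rust
use std::mem::offset_of;

#[repr(C)]
struct Header {
    tag: u8,       // offset 0
    len: u16,      // offset 2 (aligned to 2 bytes, 1 byte of padding before)
    checksum: u32, // offset 4
}

fn main() {
    // offset_of! needs no instance of the type.
    assert_eq!(offset_of!(Header, tag), 0);
    assert_eq!(offset_of!(Header, len), 2);
    assert_eq!(offset_of!(Header, checksum), 4);
    println!("checksum lives at byte {}", offset_of!(Header, checksum));
}
```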
Cargo profiles which do not enable debuginfo in outputs (e.g., debug = 0) will enable strip = "debuginfo" by default.
This is primarily needed because the (precompiled) standard library ships with debuginfo, which means that statically linked results would include the debuginfo from the standard library even if the local compilations didn't explicitly request debuginfo.
Users who do want debuginfo can explicitly enable it with the debug flag in the relevant Cargo profile.
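For instance, a profile that opts back into debuginfo might look like this (a sketch; see the Cargo documentation for the accepted debug values):

```toml
[profile.release]
# Re-enable (limited) debuginfo in release builds; this overrides the
# new `strip = "debuginfo"` default that applies when debug is 0.
debug = 1
```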
incompatible_msrv lint
The Rust project only supports the latest stable release of Rust. Some libraries aim to have an older minimum supported Rust version (MSRV), typically verifying this support by compiling in CI with an older release. However, when developing new code, it's convenient to use the latest documentation and the latest toolchain with fixed bugs, performance improvements, and other improvements. This can make it easy to accidentally start using an API that's only available on newer versions of Rust.
Clippy has added a new lint, incompatible_msrv, which will inform users if functionality being referenced is only available on newer versions than their declared MSRV.
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.77.0. We couldn't have done it without all of you. Thanks!
The rustup team is happy to announce the release of rustup version 1.27.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.27.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:
$ rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
$ rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
This long-awaited Rustup release has gathered all the new features and fixes since April 2023. These changes include improvements in Rustup's maintainability, user experience, compatibility and documentation quality.
Also, it's worth mentioning that Dirkjan Ochtman (djc) and rami3l (rami3l) have joined the team and are coordinating this new release.
At the same time, we have granted Daniel Silverstone (kinnison) and 二手掉包工程师 (hi-rustin) their well-deserved alumni status in this release cycle. Kudos for your contributions over the years and your continuous guidance on maintaining the project!
The headlines for this release are:
- Basic support for the fish shell has been added. If you're using fish, PATH configs for your Rustup installation will be added automatically from now on.
- Support for loongarch64-unknown-linux-gnu as a host platform has been added. This means you should be able to install Rustup via rustup.rs and no longer have to rely on loongnix.cn or self-compiled installations. Note that loongarch64-unknown-linux-gnu is a "tier 2 platform with host tools", so Rustup is guaranteed to build for this platform. According to Rust's target tier policy, this does not imply that these builds are also guaranteed to work, but they often work to quite a good degree and patches are always welcome!

Full details are available in the changelog!
Rustup's documentation is also available in the rustup book.
Thanks again to all the contributors who made rustup 1.27.0 possible!
Like the rest of the Rust community, crates.io has been growing rapidly, with download and package counts increasing 2-3x year-on-year. This growth doesn't come without problems, and we have made some changes to download handling on crates.io to ensure we can keep providing crates for a long time to come.
This growth has brought with it some challenges. The most significant of these is that all download requests currently go through the crates.io API, occasionally causing scaling issues. If the API is down or slow, it affects all download requests too. In fact, the number one cause of waking up our crates.io on-call team is "slow downloads" due to the API having performance issues.
Additionally, this setup is also problematic for users outside of North America, where download requests are slow due to the distance to the crates.io API servers.
To address these issues, over the last year we have decided to make some changes:
Starting from 2024-03-12, cargo will begin to download crates directly from our static.crates.io CDN servers.
This change will be facilitated by modifying the config.json file on the package index. In other words: no changes to cargo or your own system are needed for the changes to take effect. The config.json file is used by cargo to determine the download URLs for crates, and we will update it to point directly to the CDN servers, instead of the crates.io API.
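Illustratively, after the change the dl key in the index's config.json points at the CDN (the exact contents may differ):

```json
{
  "dl": "https://static.crates.io/crates",
  "api": "https://crates.io"
}
```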
Over the past few months, we have made several changes to the crates.io backend to enable this. Among them, downloads are now counted based on requests reaching the CDN rather than only those going through the API. This has caused the download numbers of most crates to increase, as some download requests were not counted before: crates.io mirrors were often downloading directly from the CDN servers already, and those downloads had previously not been counted. For crates with a lot of downloads these changes will be barely noticeable, but for smaller crates the download numbers have increased quite a bit over the past few weeks since we enabled this change.
We expect these changes to significantly improve the reliability and speed of downloads, as the performance of the crates.io API servers will no longer affect the download requests. Over the next few weeks, we will monitor the performance of the system to ensure that the changes have the expected effects.
We have noticed that some non-cargo build systems are not using the config.json file of the index to build the download URLs. We will reach out to the maintainers of those build systems to ensure that they are aware of the change and to help them update their systems to use the new download URLs. The old download URLs will continue to work, but these systems will be missing out on the potential performance improvement.
We are excited about these changes and believe they will greatly improve the reliability of crates.io. We look forward to hearing your feedback!
Since Clippy v0.0.97, and before it was even shipped with rustup, Clippy has implicitly added a feature = "cargo-clippy" config when linting your code with cargo clippy.
Back in the day (2016) this was necessary to allow, warn or deny Clippy lints using attributes:
#[cfg_attr(feature = "cargo-clippy", allow(clippy_lint_name))]
Doing this hasn't been necessary for a long time. Today, Clippy users set lint levels with tool lint attributes using the clippy:: prefix:
#[allow(clippy::lint_name)]
The implicit feature = "cargo-clippy" config has only been kept for backwards compatibility, but will now be deprecated.
As there is a rare use case for conditional compilation depending on Clippy, we will provide an alternative. So in the future you will be able to use:
#[cfg(clippy)]
Should you have instances of feature = "cargo-clippy" in your code base, you will see a warning from the new Clippy lint clippy::deprecated_clippy_cfg_attr. This lint can automatically fix your code, so if you see it triggering, just run:
cargo clippy --fix -- -Aclippy::all -Wclippy::deprecated_clippy_cfg_attr
This will fix all instances in your code.
In addition, check your .cargo/config
file for:
[target.'cfg(feature = "cargo-clippy")']
rustflags = ["-Aclippy::..."]
If you have this config, you will have to update it yourself, by either changing it to cfg(clippy) or taking this opportunity to transition to setting lint levels in Cargo.toml directly.
Currently, there's a call for testing in order to stabilize checking of conditional compilation at compile time, aka cargo check -Zcheck-cfg. If we were to keep the feature = "cargo-clippy" config, users would start seeing a lot of warnings on their feature = "cargo-clippy" conditions. To work around this, they would either need to allow the lint or add a dummy feature to their Cargo.toml in order to silence those warnings:
[features]
cargo-clippy = []
We didn't think this would be user friendly, and decided that instead we want to deprecate the implicit feature = "cargo-clippy" config and replace it with the clippy config.
The minimum requirements for Tier 1 toolchains targeting Windows will increase with the 1.78 release (scheduled for May 02, 2024). Windows 10 will now be the minimum supported version for the `*-pc-windows-*` targets. These requirements apply both to the Rust toolchain itself and to binaries produced by Rust.
Two new targets have been added with Windows 7 as their baseline: `x86_64-win7-windows-msvc` and `i686-win7-windows-msvc`. They are starting as Tier 3 targets, meaning that the Rust codebase has support for them but we don't build or test them automatically. Once these targets reach Tier 2 status, they will be available to use via rustup.
The affected targets are:

- `x86_64-pc-windows-msvc`
- `i686-pc-windows-msvc`
- `x86_64-pc-windows-gnu`
- `i686-pc-windows-gnu`
- `x86_64-pc-windows-gnullvm`
- `i686-pc-windows-gnullvm`
Prior to now, Rust had Tier 1 support for Windows 7, 8, and 8.1, but these versions no longer meet our requirements. In particular, they can no longer be tested in CI, which is required by the Target Tier Policy, and they are no longer supported by their vendor.
We're writing this blog post to announce that the Rust Project will be participating in Google Summer of Code (GSoC) 2024. If you're not eligible or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, this should contain some useful information and links.
Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.
As of today, the organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to send project proposals to organizations that appeal to them. If their project proposal is accepted, they will embark on a 12-week journey during which they will try to complete their proposed project under the guidance of an assigned mentor.
We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals.
You can start discussing the project ideas with Rust Project maintainers immediately. The project proposal application period starts on March 18, 2024, and ends on April 2, 2024 at 18:00 UTC. Take note of that deadline, as there will be no extensions!
If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.
This is the first time that the Rust Project is participating in GSoC, so we are quite excited about it. We hope that participants in the program can improve their skills, but also would love for this to bring new contributors to the Project and increase the awareness of Rust in general. We will publish another blog post later this year with more information about our participation in the program.
Hello, Rustaceans!
The Rust Survey Team is excited to share the results of our 2023 survey on the Rust Programming language, conducted between December 18, 2023 and January 15, 2024. As in previous years, the 2023 State of Rust Survey was focused on gathering insights and feedback from Rust users, and all those who are interested in the future of Rust more generally.
This eighth edition of the survey surfaced new insights and learning opportunities straight from the global Rust language community, which we will summarize below. In addition to this blog post, this year we have also prepared a report containing charts with aggregated results of all questions in the survey. Based on feedback from recent years, we have also tried to provide more comprehensive and interactive charts in this summary blog post. Let us know what you think!
Our sincerest thanks to every community member who took the time to express their opinions and experiences with Rust over the past year. Your participation will help us make Rust better for everyone.
There's a lot of data to go through, so strap in and enjoy!
| Survey | Started | Completed | Completion rate | Views  |
|--------|---------|-----------|-----------------|--------|
| 2022   | 11 482  | 9 433     | 81.3%           | 25 581 |
| 2023   | 11 950  | 9 710     | 82.2%           | 16 028 |
As shown above, we received 37% fewer survey views in 2023 than in 2022, but saw a slight uptick in starts and completions. There are many reasons why this could have been the case, but it’s possible that because we released the 2022 analysis blog so late last year, the survey was fresh in many Rustaceans’ minds. This might have prompted fewer people to feel the need to open the most recent survey. Therefore, we find it doubly impressive that there were more starts and completions in 2023, despite the lower overall view count.
This year, we have relied on automated translations of the survey, and we have asked volunteers to review them. We thank the hardworking volunteers who reviewed these automated survey translations, ultimately allowing us to offer the survey in seven languages: English, Simplified Chinese, French, German, Japanese, Russian, and Spanish. We decided not to publish the survey in languages without a translation review volunteer, meaning we could not issue the survey in Portuguese, Ukrainian, Traditional Chinese, or Korean.
The Rust Survey team understands that there were some issues with several of these translated versions, and we apologize for any difficulty this has caused. We are always looking for ways to improve going forward and are in the process of discussing improvements to this part of the survey creation process for next year.
We saw a 3pp increase in respondents taking this year’s survey in English – 80% in 2023 and 77% in 2022. Across all other languages, we saw only minor variations – all of which are likely due to us offering fewer languages overall this year due to having fewer volunteers.
Rust user respondents were asked which country they live in. The top 10 countries represented were, in order: United States (22%), Germany (12%), China (6%), United Kingdom (6%), France (6%), Canada (3%), Russia (3%), Netherlands (3%), Japan (3%), and Poland (3%). We were interested to see a small reduction in participants taking the survey in the United States in 2023 (down 3pp from the 2022 edition), which is a positive indication of the growing global nature of our community! You can try to find your country in the chart below:
Once again, the majority of our respondents reported being most comfortable communicating on technical topics in English at 92.7% — a slight difference from 93% in 2022. Again, Chinese was the second-highest choice for preferred language for technical communication at 6.1% (7% in 2022).
We also asked whether respondents consider themselves members of a marginalized community. Out of those who answered, 76% selected no, 14% selected yes, and 10% preferred not to say.
We asked the group that selected “yes” which specific groups they identified as being a member of. The majority of those who consider themselves a member of an underrepresented or marginalized group in technology identify as lesbian, gay, bisexual, or otherwise non-heterosexual. The second most selected option was neurodivergent at 41%, followed by trans at 31.4%. Going forward, it will be important for us to track these figures over time to learn how our community changes and to identify the gaps we need to fill.
As Rust continues to grow, we must acknowledge the diversity, equity, and inclusivity (DEI)-related gaps that exist in the Rust community. Sadly, Rust is not unique in this regard. For instance, only 20% of 2023 respondents to this representation question consider themselves a member of a racial or ethnic minority and only 26% identify as a woman. We would like to see more equitable figures in these and other categories. In 2023, the Rust Foundation formed a diversity, equity, and inclusion subcommittee on its Board of Directors whose members are aware of these results and are actively discussing ways that the Foundation might be able to better support underrepresented groups in Rust and help make our ecosystem more globally inclusive. One of the central goals of the Rust Foundation board's subcommittee is to analyze information about our community to find out what gaps exist, so this information is a helpful place to start. This topic deserves much more depth than is possible here, but readers can expect more on the subject in the future.
In 2023, we saw a slight jump in the number of respondents that self-identify as a Rust user, from 91% in 2022 to 93% in 2023.
Of those who used Rust in 2023, 49% did so on a daily (or nearly daily) basis — a small increase of 2pp from the previous year.
Among those who did not identify as Rust users, the most common reason, cited by 67%, was once again that they simply haven’t had the chance to prioritize learning Rust yet, while 31% cited the perception of difficulty as their primary reason for not having used it.
[PNG] [SVG] [Wordcloud of open answers]
Of the former Rust users who participated in the 2023 survey, 46% cited factors outside their control (a decrease of 1pp from 2022), 31% stopped using Rust due to preferring another language (an increase of 9pp from 2022), and 24% cited difficulty as the primary reason for giving up (a decrease of 6pp from 2022).
[PNG] [SVG] [Wordcloud of open answers]
Rust expertise has generally increased amongst our respondents over the past year! 23% can write (only) simple programs in Rust (a decrease of 6pp from 2022), 28% can write production-ready code (an increase of 1pp), and 47% consider themselves productive using Rust — up from 42% in 2022. While the survey is just one tool to measure the changes in Rust expertise overall, these numbers are heartening as they represent knowledge growth for many Rustaceans returning to the survey year over year.
In terms of operating systems used by Rustaceans, the situation is very similar to the results from 2022, with Linux being the most popular choice of Rust users, followed by macOS and Windows, which have a very similar share of usage.
[PNG] [SVG] [Wordcloud of open answers]
Rust programmers target a diverse set of platforms with their Rust programs, even though the most popular target by far is still a Linux machine. We can see a slight uptick in users targeting WebAssembly, embedded and mobile platforms, which speaks to the versatility of Rust.
[PNG] [SVG] [Wordcloud of open answers]
We cannot, of course, forget the favourite topic of many programmers: which IDE (development environment) they use. Visual Studio Code still seems to be the most popular option, with RustRover (which was released last year) also gaining some traction.
[PNG] [SVG] [Wordcloud of open answers]
You can also take a look at the linked wordcloud that summarizes open answers to this question (the "Other" category), to see what other editors are also popular.
We were excited to see a continued upward year-over-year trend of Rust usage at work. 34% of 2023 survey respondents use Rust in the majority of their coding at work — an increase of 5pp from 2022. Of this group, 39% work for organizations that make non-trivial use of Rust.
Once again, the top reason employers of our survey respondents invested in Rust was the ability to build relatively correct and bug-free software at 86% — a 4pp increase from 2022 responses. The second most popular reason was Rust’s performance characteristics at 83%.
We were also pleased to see an increase in the number of people who reported that Rust helped their company achieve its goals at 79% — an increase of 7pp from 2022. 77% of respondents reported that their organization is likely to use Rust again in the future — an increase of 3pp from the previous year. Interestingly, we saw a decrease in the number of people who reported that using Rust has been challenging for their organization: 34% in 2023 and 39% in 2022. We also saw an increase in respondents reporting that Rust has been worth the cost of adoption: 64% in 2023 and 60% in 2022.
There are many factors playing into this, but the growing awareness around Rust has likely resulted in the proliferation of resources, allowing new teams using Rust to be better supported.
In terms of technology domains, it seems that Rust is especially popular for creating server backends, web and networking services and cloud technologies.
[PNG] [SVG] [Wordcloud of open answers]
You can scroll the chart to the right to see more domains. Note that the Database implementation and Computer Games domains were not offered as closed answers in the 2022 survey (they were merely submitted as open answers), which explains the large jump.
It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!
As always, one of the main goals of the State of Rust survey is to shed light on challenges, concerns, and priorities on Rustaceans’ minds over the past year.
Of those respondents who shared their main worries for the future of Rust (9,374), the majority were concerned about Rust becoming too complex at 43% — a 5pp increase from 2022. 42% of respondents were concerned about a low level of Rust usage in the tech industry. 32% of respondents in 2023 were most concerned about Rust developers and maintainers not being properly supported — a 6pp increase from 2022.
We saw a notable decrease in respondents who were not at all concerned about the future of Rust, 18% in 2023 and 30% in 2022.
Thank you to all participants for your candid feedback which will go a long way toward improving Rust for everyone.
[PNG] [SVG] [Wordcloud of open answers]
Closed answers marked with N/A were not present in the previous (2022) version of the survey.
In terms of features that Rust users want to be implemented, stabilized or improved, the most desired improvements are in the areas of traits (trait aliases, associated type defaults, etc.), const execution (generic const expressions, const trait methods, etc.) and async (async closures, coroutines).
[PNG] [SVG] [Wordcloud of open answers]
It is interesting that 20% of respondents answered that they wish Rust to slow down the development of new features, which likely goes hand in hand with the previously mentioned worry that Rust is becoming too complex.
The areas of Rust that Rustaceans struggle with the most seem to be asynchronous Rust, the traits and generics system, and the borrow checker.
[PNG] [SVG] [Wordcloud of open answers]
Respondents of the survey want the Rust maintainers to mainly prioritize fixing compiler bugs (68%), improving the runtime performance of Rust programs (57%) and also improving compile times (45%).
As in recent years, respondents noted that compilation time is one of the most important areas to improve. However, it is interesting that respondents now seem to consider runtime performance to be more important than compile times.
Each year, the results of the State of Rust survey help reveal the areas that need improvement across the Rust Project and ecosystem, as well as the aspects that are working well for our community.
We are aware that the survey has contained some confusing questions, and we will try to improve upon that in next year's survey. If you have any suggestions for the Rust Annual Survey, please let us know!
We are immensely grateful to those who participated in the 2023 State of Rust Survey and facilitated its creation. While there are always challenges associated with developing and maintaining a programming language, this year we were pleased to see a high level of survey participation and candid feedback that will truly help us make Rust work better for everyone.
If you’d like to dig into more details, we recommend browsing through the full survey report.
The Rust team is happy to announce a new version of Rust, 1.76.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.76.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.76.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!
This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.
A new ABI Compatibility section in the function pointer documentation describes what it means for function signatures to be ABI-compatible. A large part of that is the compatibility of argument types and return types, with a list of those that are currently considered compatible in Rust. For the most part, this documentation is not adding any new guarantees, only describing the existing state of compatibility.
The one new addition is that it is now guaranteed that `char` and `u32` are ABI compatible. They have always had the same size and alignment, but now they are considered equivalent even in function call ABI, consistent with the documentation above.
For debugging purposes, `any::type_name::<T>()` has been available since Rust 1.38 to return a string description of the type `T`, but that requires an explicit type parameter. It is not always easy to specify that type, especially for unnameable types like closures or for opaque return types. The new `type_name_of_val(&T)` offers a way to get a descriptive name from any reference to a type.
```rust
fn get_iter() -> impl Iterator<Item = i32> {
    [1, 2, 3].into_iter()
}

fn main() {
    let iter = get_iter();
    let iter_name = std::any::type_name_of_val(&iter);
    let sum: i32 = iter.sum();
    println!("The sum of the `{iter_name}` is {sum}.");
}
```
This currently prints:

```
The sum of the `core::array::iter::IntoIter<i32, 3>` is 6.
```
`std::collections::hash_map`

Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.76.0. We couldn't have done it without all of you. Thanks!
Cargo and crates.io were developed in the rush leading up to the Rust 1.0 release to fill the needs for a tool to manage dependencies and a registry that people could use to share code. This rapid work resulted in these tools being connected with an API that initially didn't return the correct HTTP response status codes. After the Rust 1.0 release, Rust's stability guarantees around backward compatibility made this non-trivial to fix, as we wanted older versions of Cargo to continue working with the current crates.io API.
When an old version of Cargo receives a non-"200 OK" response, it displays the raw JSON body like this:

```
error: failed to get a 200 OK response, got 400
headers:
    HTTP/1.1 400 Bad Request
    Content-Type: application/json; charset=utf-8
    Content-Length: 171
body:
{"errors":[{"detail":"missing or empty metadata fields: description, license. Please see https://doc.rust-lang.org/cargo/reference/manifest.html for how to upload metadata"}]}
```
This was improved in pull request #6771, which was released in Cargo 1.34 (mid-2019). Since then, Cargo has supported receiving 4xx and 5xx status codes too and extracts the error message from the JSON response, if available.
On 2024-03-04 we will switch the API from returning "200 OK" status codes for errors to the new 4xx/5xx behavior. Cargo 1.33 and below will keep working after this change, but will show the raw JSON body instead of a nicely formatted error message. We feel confident that this degraded error message display will not affect very many users. According to the crates.io request logs only very few requests are made by Cargo 1.33 and older versions.
This is the list of API endpoints that will be affected by this change:

- `GET /api/v1/crates`
- `PUT /api/v1/crates/new`
- `PUT /api/v1/crates/:crate/:version/yank`
- `DELETE /api/v1/crates/:crate/:version/unyank`
- `GET /api/v1/crates/:crate/owners`
- `PUT /api/v1/crates/:crate/owners`
- `DELETE /api/v1/crates/:crate/owners`
All other endpoints have already been using regular HTTP status codes for some time.
If you are still using Cargo 1.33 or older, we recommend upgrading to a newer version to get the improved error messages and all the other nice things that the Cargo team has built since then.
The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!
`async fn` and return-position `impl Trait` in traits

As announced last week, Rust 1.75 supports use of `async fn` and `-> impl Trait` in traits. However, this initial release comes with some limitations that are described in the announcement post. It's expected that these limitations will be lifted in future releases.
Raw pointers (`*const T` and `*mut T`) used to primarily support operations operating in units of `T`. For example, `<*const T>::add(1)` would add `size_of::<T>()` bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to `*const u8`/`*mut u8` first.
The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall time improvement on our benchmarks. This tool optimizes the layout of the `librustc_driver.so` library containing most of the rustc code, allowing for better cache utilization.
We are also now building rustc with `-Ccodegen-units=1`, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% mean wall time win on our benchmarks.
In this release these optimizations are limited to `x86_64-unknown-linux-gnu` compilers, but we expect to expand that over time to include more platforms.
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!
The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of `async fn` in traits. Rust 1.75, which hits stable next week, will include support for both `-> impl Trait` notation and `async fn` in traits.
This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.
Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write `impl Trait` as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements `Trait`". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.
```rust
/// Given a list of players, return an iterator
/// over their names.
fn player_names(
    players: &[Player]
) -> impl Iterator<Item = &String> {
    players
        .iter()
        .map(|p| &p.name)
}
```
Starting in Rust 1.75, you can use return-position `impl Trait` in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:
```rust
trait Container {
    fn items(&self) -> impl Iterator<Item = Widget>;
}

impl Container for MyContainer {
    fn items(&self) -> impl Iterator<Item = Widget> {
        self.items.iter().cloned()
    }
}
```
So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return `-> impl Future`. Since these are now permitted in traits, we also permit you to write traits that use `async fn`.
```rust
trait HttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
    //    ^^^^^^^^ desugars to:
    // fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}
```
`-> impl Trait` in public traits

The use of `-> impl Trait` is still discouraged for general use in public traits and APIs for the reason that users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the `Container` trait:
```rust
fn print_in_reverse(container: impl Container) {
    for item in container.items().rev() {
        // ERROR:                 ^^^
        // the trait `DoubleEndedIterator`
        // is not implemented for
        // `impl Iterator<Item = Widget>`
        eprintln!("{item}");
    }
}
```
Even though some implementations might return an iterator that implements `DoubleEndedIterator`, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, `-> impl Trait` is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.
`async fn` in public traits

Since `async fn` desugars to `-> impl Future`, the same limitations apply. In fact, if you use bare `async fn` in a public trait today, you'll see a warning.
```
warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
 --> src/lib.rs:7:5
  |
7 |     async fn fetch(&self, url: Url) -> HtmlBody;
  |     ^^^^^
  |
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
  |
7 -     async fn fetch(&self, url: Url) -> HtmlBody;
7 +     fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
  |
```
Of particular interest to users of async are `Send` bounds on the returned future. Since users cannot add bounds later, the error message is saying that you as a trait author need to make a choice: Do you want your trait to work with multithreaded, work-stealing executors?

Thankfully, we have a solution that allows using `async fn` in public traits today! We recommend using the `trait_variant::make` proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with `cargo add trait-variant`, then use it like so:
```rust
#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
}
```
This creates two versions of your trait: `LocalHttpService` for single-threaded executors and `HttpService` for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional `Send` bounds:
```rust
pub trait HttpService: Send {
    fn fetch(
        &self,
        url: Url,
    ) -> impl Future<Output = HtmlBody> + Send;
}
```
This macro works for async because `impl Future` rarely requires additional bounds other than `Send`, so we can set our users up for success. See the FAQ below for an example of where this is needed.
Traits that use `-> impl Trait` and `async fn` are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the `trait-variant` crate.
In the future we would like to allow users to add their own bounds to `impl Trait` return types, which would make them more generally useful. It would also enable more advanced uses of `async fn`. The syntax might look something like this:

```rust
trait HttpService = LocalHttpService<fetch(): Send> + Send;
```
Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.
Of course, the goals of the Async Working Group don't stop with async fn
in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.
`-> impl Trait` in traits?

For private traits you can use `-> impl Trait` freely. For public traits, it's best to avoid it for now unless you can anticipate all the bounds your users might want (in which case you can use `#[trait_variant::make]`, as we do for async). We expect to lift this restriction in the future.
`#[async_trait]` macro?

There are a couple of reasons you might need to continue using async-trait:
As stated above, we hope to enable dynamic dispatch in a future version of the `trait-variant` crate.
`async fn` in traits? What are the limitations?

Assuming you don't need to use `#[async_trait]` for one of the reasons stated above, it's totally fine to use regular `async fn` in traits. Just remember to use `#[trait_variant::make]` if you want to support multithreaded runtimes.
The biggest limitation is that a type must always decide if it implements the `Send` or non-`Send` version of a trait. It cannot implement the `Send` version conditionally on one of its generics. This can come up in the middleware pattern, for example, a `RequestLimitingService<T>` that is `HttpService` if `T: HttpService`.
`#[trait_variant::make]` and `Send` bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:
```rust
fn spawn_task(service: impl HttpService + 'static) {
    tokio::spawn(async move {
        let url = Url::from("https://rust-lang.org");
        let _body = service.fetch(url).await;
    });
}
```
Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.
Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.
For a more thorough explanation of the problem, see this blog post.
Yes, you can freely move between the `async fn` and `-> impl Future` spellings in your traits and impls. This is true even when one form has a `Send` bound. This makes the traits created by `trait_variant` nicer to use.
```rust
trait HttpService: Send {
    fn fetch(&self, url: Url)
        -> impl Future<Output = HtmlBody> + Send;
}

impl HttpService for MyService {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // This works, as long as `do_fetch(): Send`!
        self.client.do_fetch(url).await.into_body()
    }
}
```
`impl Future + '_`?

For `-> impl Trait` in traits we adopted the 2024 Capture Rules early. This means that the `+ '_` you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.
`-> impl Trait`?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:
```rust
pub trait Foo {
    fn foo(self) -> impl Debug;
}

impl Foo for u32 {
    fn foo(self) -> String {
        //          ^^^^^^
        // warning: impl trait in impl method signature does not match trait method signature
        self.to_string()
    }
}
```
The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?
fn main() {
    // Did the implementer mean to allow
    // use of `Display`, or only `Debug` as
    // the trait says?
    println!("{}", 32.foo());
}
Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)]
on the impl.
The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.
…async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead. ↩

It's time for the 2023 State of Rust Survey!
Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.
Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.
We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.
Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:
Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.
This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!
If you have any questions, please see our frequently asked questions.
We appreciate your participation!
Click here to read a summary of last year's survey findings.
The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!
You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.
But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:
In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!
We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.
Please keep in mind that the following criteria determine the sort of changes we're looking for:
cargo fix), in order to make upgrading to a new Edition as painless as possible.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.
Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)
Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.
We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.
Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory. This post includes:
In short, we are asking people who use the nightly channel to enable this feature and report any issues you encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml
or %USERPROFILE%\.cargo\config.toml
for Windows):
[unstable]
gc = true
Or set the CARGO_UNSTABLE_GC=true
environment variable or use the -Zgc
CLI flag to turn it on for individual commands.
We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.
Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.
This cache includes:

- .crate files downloaded from a registry.
- The extracted contents of those .crate files, which rustc uses to read the source and compile dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.
What isn't yet included is cleaning of target directories; see Plan for the future.
When you run cargo, once a day it will inspect the last-use cache tracker, and determine if any cache elements have not been used in a while. If they have not, then they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build
or cargo fetch
.
The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.
Automatic deletion is disabled if cargo is offline such as with --offline
or --frozen
to avoid deleting artifacts that may need to be used if you are offline for a long period of time.
The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.
If you want to manually delete data from the cache, several options have been added under the cargo clean gc
subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days
) or specifying the maximum size of the cache (such as --max-download-size=1GiB
). See the Manual garbage collection section or run cargo clean gc --help
for more details on which options are supported.
This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.
After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache
.
After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.
The end result is that after that period of time you should start to notice the home directory using less space overall.
You can also try out the cargo clean gc
command and explore some of its options if you want to try to manually delete some data.
If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.
We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:
Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.
(These sections are only for the intently curious among you.)
The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.
One big focus was to make sure that the performance of each invocation of cargo
is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.
In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.
Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc
, it previously did not hold a lock under the assumption that existing cache data will not be modified.
However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.
Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.
Because we do not want problems with GC from disrupting users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.
Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's particularly stable on-disk data format which has been stable since 2004. Cargo has support to do schema migrations within the database that stay backwards compatible.
A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.
One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.
Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.
Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate that has been built with what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.
We are looking to replace this system with SQLite which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use substantial amount of disk space. Additionally we are looking to implement other improvements, such as more accurate fingerprint tracking, provide information about why cargo thinks something needed to be recompiled, and to hopefully improve performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.
The Rust team has published a new point release of Rust, 1.74.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.74.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
1.74.1 resolves a few regressions introduced in 1.74.0:
Many people came together to create Rust 1.74.1. We couldn't have done it without all of you. Thanks!
The Rust team is happy to announce a new version of Rust, 1.74.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.74.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.74.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta
) or the nightly channel (rustup default nightly
). Please report any bugs you might come across!
As proposed in RFC 3389, the Cargo.toml manifest now supports a [lints] table to configure the reporting level (forbid, deny, warn, allow) for lints from the compiler and other tools. So rather than setting RUSTFLAGS with -F/-D/-W/-A, which would affect the entire build, or using crate-level attributes like:
#![forbid(unsafe_code)]
#![deny(clippy::enum_glob_use)]
You can now write those in your package manifest for Cargo to handle:
[lints.rust]
unsafe_code = "forbid"
[lints.clippy]
enum_glob_use = "deny"
These can also be configured in a [workspace.lints] table, then inherited by [lints] workspace = true like many other workspace settings. Cargo will also track changes to these settings when deciding which crates need to be rebuilt.
For more information, see the lints and workspace.lints sections of the Cargo reference manual.
Two more related Cargo features are included in this release: credential providers and authenticated private registries.
Credential providers allow configuration of how Cargo gets credentials for a registry. Built-in providers are included for OS-specific secure secret storage on Linux, macOS, and Windows. Additionally, custom providers can be written to support arbitrary methods of storing or generating tokens. Using a secure credential provider reduces risk of registry tokens leaking.
Registries can now optionally require authentication for all operations, not just publishing. This enables private Cargo registries to offer more secure hosting of crates. Use of private registries requires the configuration of a credential provider.
For further information, see the Cargo docs.
If you have ever received the error that a "return type cannot contain a projection or Self
that references lifetimes from a parent scope," you may now rest easy! The compiler now allows mentioning Self
and associated types in opaque return types, like async fn
and -> impl Trait
. This is the kind of feature that gets Rust closer to how you might just expect it to work, even if you have no idea about jargon like "projection".
This functionality had an unstable feature gate because its implementation originally didn't properly deal with captured lifetimes, and once that was fixed it was given time to make sure it was sound. For more technical details, see the stabilization pull request, which describes the following examples that are all now allowed:
struct Wrapper<'a, T>(&'a T);

// Opaque return types that mention `Self`:
impl Wrapper<'_, ()> {
    async fn async_fn() -> Self { /* ... */ }
    fn impl_trait() -> impl Iterator<Item = Self> { /* ... */ }
}

trait Trait<'a> {
    type Assoc;
    fn new() -> Self::Assoc;
}

impl Trait<'_> for () {
    type Assoc = ();
    fn new() {}
}

// Opaque return types that mention an associated type:
impl<'a, T: Trait<'a>> Wrapper<'a, T> {
    async fn mk_assoc() -> T::Assoc { /* ... */ }
    fn a_few_assocs() -> impl Iterator<Item = T::Assoc> { /* ... */ }
}
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.74.0. We couldn't have done it without all of you. Thanks!
The Rust compiler's front-end can now use parallel execution to significantly reduce compile times. To try it, run the nightly compiler with the -Z threads=8
option. This feature is currently experimental, and we aim to ship it in the stable compiler in 2024.
Keep reading to learn why a parallel front-end is needed and how it works, or just skip ahead to the How to use it section.
Rust compile times are a perennial concern. The Compiler Performance Working Group has continually improved compiler performance for several years. For example, in the first 10 months of 2023, there were mean reductions in compile time of 13%, in peak memory use of 15%, and in binary size of 7%, as measured by our performance suite.
However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining.
But there is one piece of large but high-hanging fruit: parallelism. Current Rust compiler users benefit from two kinds of parallelism, and the newly parallel front-end adds a third kind.
When you compile a Rust program, Cargo launches multiple rustc processes, compiling multiple crates in parallel. This works well. Try compiling a large Rust program with the -j1
flag to disable this parallelization and it will take a lot longer than normal.
You can visualise this parallelism if you build with Cargo's --timings flag, which produces a chart showing how the crates are compiled. The following image shows the timeline when building ripgrep on a machine with 28 virtual cores.
There are 60 horizontal lines, each one representing a distinct process. Their durations range from a fraction of a second to multiple seconds. Most of them are rustc, and the few orange ones are build scripts. The first twenty processes all start at the same time. This is possible because there are no dependencies between the relevant crates. But further down the graph, parallelism reduces as crate dependencies increase. Although the compiler can overlap compilation of dependent crates somewhat thanks to a feature called pipelined compilation, there is much less parallel execution happening towards the end of compilation, and this is typical for large Rust programs. Interprocess parallelism is not enough to take full advantage of many cores. For more speed, we need parallelism within each process.
The compiler is split into two halves: the front-end and the back-end.
The front-end does many things, including parsing, type checking, and borrow checking. Until this week, it could not use parallel execution.
The back-end performs code generation. It generates code in chunks called "codegen units" and then LLVM processes these in parallel. This is a form of coarse-grained parallelism.
We can visualize the difference between the serial front-end and the parallel back-end. The following image shows the output of a profiler called Samply measuring rustc as it does a release build of the final crate in Cargo. The image is superimposed with markers that indicate front-end and back-end execution.
Each horizontal line represents a thread. The main thread is labelled "rustc" and is shown at the bottom. It is busy for most of the execution. The other 16 threads are LLVM threads, labelled "opt cgu.00" through to "opt cgu.15". There are 16 threads because 16 is the default number of codegen units for a release build.
There are several things worth noting.
The front-end is now capable of parallel execution. It uses Rayon to perform compilation tasks using fine-grained parallelism. Many data structures are synchronized by mutexes and read-write locks, atomic types are used where appropriate, and many front-end operations are made parallel. The addition of parallelism was done by modifying a relatively small number of key points in the code. The vast majority of the front-end code did not need to be changed.
When the parallel front-end is enabled and configured to use eight threads, we get the following Samply profile when compiling the same example as before.
Again, there are several things worth noting.
Rust compilation has long benefited from interprocess parallelism, via Cargo, and from intraprocess parallelism in the back-end. It can now also benefit from intraprocess parallelism in the front-end.
You might wonder how interprocess parallelism and intraprocess parallelism interact. If we have 20 parallel rustc invocations and each one can have up to 16 threads running, could we end up with hundreds of threads on a machine with only tens of cores, resulting in inefficient execution as the OS tries its best to schedule them?
Fortunately no. The compiler uses the jobserver protocol to limit the number of threads it creates. If a lot of interprocess parallelism is occurring, intraprocess parallelism will be limited appropriately, and the number of threads will not exceed the number of cores.
The nightly compiler is now shipping with the parallel front-end enabled. However, by default it runs in single-threaded mode and won't reduce compile times.
Keen users can opt into multi-threaded mode with the -Z threads
option. For example:
$ RUSTFLAGS="-Z threads=8" cargo build --release
Alternatively, to opt in from a config.toml file (for one or more projects), add these lines:
[build]
rustflags = ["-Z", "threads=8"]
It may be surprising that single-threaded mode is the default. Why parallelize the front-end and then run it in single-threaded mode? The answer is simple: caution. This is a big change! The parallel front-end has a lot of new code. Single-threaded mode exercises most of the new code, but excludes the possibility of threading bugs such as deadlocks that can affect multi-threaded mode. Even in Rust, parallel programs are harder to write correctly than serial programs. For this reason the parallel front-end also won't be shipped in beta or stable releases for some time.
When the parallel front-end is run in single-threaded mode, compilation times are typically 0% to 2% slower than with the serial front-end. This should be barely noticeable.
When the parallel front-end is run in multi-threaded mode with -Z threads=8
, our measurements on real-world code show that compile times can be reduced by up to 50%, though the effects vary widely and depend on the characteristics of the code and its build configuration. For example, dev builds are likely to see bigger improvements than release builds because release builds usually spend more time doing optimizations in the back-end. A small number of cases compile more slowly in multi-threaded mode than single-threaded mode. These are mostly tiny programs that already compile quickly.
We recommend eight threads because this is the configuration we have tested the most and it is known to give good results. Values lower than eight will see smaller benefits. Values greater than eight will give diminishing returns and may even give worse performance.
If a 50% improvement seems low when going from one to eight threads, recall from the explanation above that the front-end only accounts for part of compile times, and the back-end is already parallel. You can't beat Amdahl's Law.
Memory usage can increase significantly in multi-threaded mode. We have seen increases of up to 35%. This is unsurprising given that various parts of compilation, each of which requires a certain amount of memory, are now executing in parallel.
Reliability in single-threaded mode should be high.
In multi-threaded mode there are some known bugs, including deadlocks. If compilation hangs, you have probably hit one of them.
If you have any problems with the parallel front-end, please check the issues marked with the "WG-compiler-parallel" label. If your problem does not match any of the existing issues, please file a new issue.
For more general feedback, please start a discussion on the wg-parallel-rustc Zulip channel. We are particularly interested to hear the performance effects on the code you care about.
We are working to improve the performance of the parallel front-end. As the graphs above showed, there is room to improve the utilization of the threads in the front-end. We are also ironing out the remaining bugs in multi-threaded mode.
We aim to stabilize the -Z threads option and ship the parallel front-end running by default in multi-threaded mode on stable releases in 2024.
The parallel front-end has been under development for a long time. It was started by @Zoxc, who also did most of the work for several years. After a period of inactivity, the project was revived this year by @SparrowLii, who led the effort to get it shipped. Other members of the Parallel Rustc Working Group have also been involved with reviews and other activities. Many thanks to everyone involved.
The "non-canonical downloads" feature allows everyone to download the serde_derive crate from https://crates.io/api/v1/crates/serde%5Fderive/1.0.189/download, but also from https://crates.io/api/v1/crates/SERDE-derive/1.0.189/download, where the underscore was replaced with a hyphen (crates.io normalizes underscores and hyphens to be the same for uniqueness purposes, so it isn't possible to publish a crate named serde-derive because serde_derive exists) and parts of the crate name use uppercase characters. The same also works vice versa: if the canonical crate name uses hyphens, the download URL can use underscores instead. It even works with any other combination for crates that have multiple such characters (please don't mix them…!).
Supporting such non-canonical download requests means that the crates.io server needs to perform a database lookup for every download request to figure out the canonical crate name. The canonical crate name is then used to construct a download URL and the client is HTTP-redirected to that URL.
While we introduced a caching layer some time ago to address some of the performance concerns, having all download requests go through our backend servers has become problematic and, at the current rate of growth, will not become any easier in the future.
Having to support "non-canonical downloads", however, prevents us from using CDNs directly for the download requests. If we can remove support for non-canonical download requests, it will unlock significant performance and reliability gains.
cargo always uses the canonical crate name from the package index to construct the corresponding download URLs. If support for non-canonical downloads was removed on the crates.io side, cargo would still work exactly the same as before.
Looking at the crates.io request logs, the following user-agents are currently relying on "non-canonical downloads" support:
Three of these are just generic HTTP client libraries. GNU Guile is apparently a programming language, so most likely this is also a generic user-agent from a custom user program.
cargo-binstall is a tool enabling installation of binary artifacts of crates. The maintainer is already aware of the upcoming change and confirmed that more recent versions of cargo-binstall should not be affected by it.
We recommend that any scripts relying on non-canonical downloads be adjusted to use the canonical names from the package index, the database dump, or the crates.io API instead. If you don't know which data source is best suited for you, we welcome you to take a look at the crates.io data access page.
Note that we will still need the database query for download counting purposes for now. We have plans to remove this requirement as well, but those efforts are blocked by us still supporting non-canonical downloads.
If you want to follow the progress on implementing these changes or if you have comments you can subscribe to the corresponding tracking issue. Related discussions are also happening on the crates.io Zulip stream.
Around mid-October of 2023 the crates.io team was notified by one of our users that a shields.io badge for their crate stopped working. The issue reporter was kind enough to already debug the problem and figured out that the API request that shields.io sends to crates.io was most likely the problem. Here is a quote from the original issue:
This crate makes heavy use of feature flags which bloat the response payload of the API.
Apparently the API response for this specific crate had broken the 20 MB mark and shields.io wasn't particularly happy with this. Interestingly, this crate only had 9 versions published at this point in time. But how do you get to 20 MB with only 9 published versions?
As the quote above already mentions, this crate is using features… a lot of features… almost 23,000! 😱
What crate needs that many features? Well, this crate provides SVG icons for Rust-based web applications… and it uses one feature per icon so that the payload size of the final WebAssembly bundle stays small.
At first glance there should be nothing wrong with this. This seems like a reasonable thing to do from a crate author perspective and neither cargo, nor crates.io, were showing any warnings about this. Unfortunately, some of the internals are not too happy about such a high number of features…
The first problem that was already identified by the crate author: the API responses from crates.io are getting veeeery large. Adding to the problem is the fact that the crates.io API currently does not paginate the list of published versions. Changing this is obviously a breaking change, so our team had been a bit reluctant to change the behavior of the API in that regard, though this situation has shown that we will likely have to tackle this problem in the near future.
The next problem is that the index file for this crate is also getting large. With 9 published versions it already contains 11 MB of data. And just like the crates.io API, there is currently no pagination built into the package index file format.
Now you may ask, why do the package index and cargo need to know about features? Well, the easy answer is: for dependency resolution. Features can enable optional dependencies, so when a dependency feature is used it might influence the dependency resolution. Our initial thought was that we could at least drop all empty feature declarations from the index file (e.g. foo = []), but the cargo team informed us that cargo relies on them being available there too, so for backwards-compatibility reasons this is not an option.
On the bright side, most Rust users are on cargo versions these days that use the sparse package index by default, which only downloads index files for packages actually being used. In other words: only users of this icon crate need to pay the price for downloading all the metadata. On the flipside, this means users who are still using the git-based index are all paying for this one crate using 23,000 features.
So, where do we go from here? 🤔
While we believe that supporting such a high number of features is conceptually a valid request, with the current implementation details in crates.io and cargo we cannot support this. After analyzing all of these downstream effects from a single crate having that many features, we realized we need some form of restriction on crates.io to keep the system from falling apart.
Now comes the important part: on 2023-10-16 the crates.io team deployed a change limiting the number of features a crate can have to 300 for any new crates/versions being published.
… for now, or at least until we have found solutions for the above problems.
We are aware of a couple of crates that also have legitimate reasons for having more than 300 features, and we have granted them appropriate exceptions to this rule, but we would like to ask everyone to be mindful of these limitations of our current systems.
We also invite everyone to participate in finding solutions to the above problems. The best place to discuss ideas is the crates.io Zulip stream, and once an idea is a bit more fleshed out it will then be transformed into an RFC.
Finally, we would like to thank Charles Edward Gagnon for making us aware of this problem. We also want to reiterate that the author and their crate are not to blame for this. It is hard to know of these crates.io implementation details when developing crates, so if anything, the blame would be on us, the crates.io team, for not having limits on this earlier. Anyway, we have them now, and now you all know why! 👋
We are happy to announce that we have completed the process to elect new Project Directors.
The new Project Directors are:
They will join Ryan Levick and Mark Rousskov to make up the five members of the Rust Foundation Board of Directors who represent the Rust Project.
The board is made up of Project Directors, who come from and represent the Rust Project, and Member Directors, who represent the corporate members of the Rust Foundation.
Both of these director groups have equal voting power.
We look forward to working with and being represented by this new group of project directors.
We were fortunate to have a number of excellent candidates and this was a difficult decision. We wish to express our gratitude to all of the candidates who were considered for this role! We also extend our thanks to the project as a whole who participated by nominating candidates and providing additional feedback once the nominees were published. Finally, we want to share our appreciation for the Project Director Elections Subcommittee for working to design and facilitate running this election process.
This was a challenging decision for a number of reasons.
This was also our first time doing this process, and we learned a lot that we will use to improve it going forward. The Project Director Elections Subcommittee will be following up with a retrospective outlining how well we achieved our goals with this process and making suggestions for future elections. We are expecting another election next year to start a rotating cadence of 2-year terms. Project governance is about iterating and refining over time.
Once again, we thank all who were involved in this process and we are excited to welcome our new Project Directors.
The Rust team is happy to announce a new version of Rust, 1.73.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.73.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.73.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
The output produced by the default panic handler has been changed to put the panic message on its own line instead of wrapping it in quotes. This can make panic messages easier to read, as shown in this example:
fn main() {
let file = "ferris.txt";
panic!("oh no! {file:?} not found!");
}
Output before Rust 1.73:
thread 'main' panicked at 'oh no! "ferris.txt" not found!', src/main.rs:3:5
Output starting in Rust 1.73:
thread 'main' panicked at src/main.rs:3:5:
oh no! "ferris.txt" not found!
This is especially useful when the message is long, contains nested quotes, or spans multiple lines.
Additionally, the panic messages produced by assert_eq and assert_ne have been modified, moving the custom message (the third argument) and removing some unnecessary punctuation, as shown below:
fn main() {
assert_eq!("🦀", "🐟", "ferris is not a fish");
}
Output before Rust 1.73:
thread 'main' panicked at 'assertion failed: `(left == right)`
left: `"🦀"`,
right: `"🐟"`: ferris is not a fish', src/main.rs:2:5
Output starting in Rust 1.73:
thread 'main' panicked at src/main.rs:2:5:
assertion `left == right` failed: ferris is not a fish
left: "🦀"
right: "🐟"
As proposed in RFC 3184, LocalKey<Cell<T>> and LocalKey<RefCell<T>> can now be directly manipulated with get(), set(), take(), and replace() methods, rather than jumping through a with(|inner| ...) closure as needed for general LocalKey work. LocalKey<T> is the type of thread_local! statics.
The new methods make common code more concise and avoid running the extra initialization code for the default value specified in thread_local! for new threads.
thread_local! {
static THINGS: Cell<Vec<i32>> = Cell::new(Vec::new());
}
fn f() {
// before:
THINGS.with(|i| i.set(vec![1, 2, 3]));
// now:
THINGS.set(vec![1, 2, 3]);
// ...
// before:
let v = THINGS.with(|i| i.take());
// now:
let v: Vec<i32> = THINGS.take();
}
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.73.0. We couldn't have done it without all of you. Thanks!
As of Rust 1.74 (to be released on November 16th, 2023), the minimum version of Apple's platforms (iOS, macOS, and tvOS) that the Rust toolchain supports will be increased. These changes affect the Rust compiler itself (rustc), other host tooling, and most importantly, the standard library and any binaries produced that use it. With these changes in place, any binaries produced will stop loading on older versions or exhibit other, unspecified, behavior.
The new minimum versions are now:
If your application does not already target or support macOS 10.7-10.11 or iOS 7-9, these changes most likely do not affect you.
The following contains each affected target, and the comprehensive effects on it:
x86_64-apple-darwin (Minimum OS raised)
aarch64-apple-ios (Minimum OS raised)
aarch64-apple-ios-sim (Minimum iOS and macOS version raised.)
x86_64-apple-ios (Minimum iOS and macOS version raised. This is also a simulator target.)
aarch64-apple-tvos (Minimum OS raised)
armv7-apple-ios (Target removed. The oldest iOS 10-compatible device uses ARMv7s.)
armv7s-apple-ios (Minimum OS raised)
i386-apple-ios (Minimum OS raised)
i686-apple-darwin (Minimum OS raised)
x86_64-apple-tvos (Minimum tvOS and macOS version raised. This is also a simulator target.)
From these changes, only one target has been removed entirely: armv7-apple-ios. It was a tier 3 target.
Note that Mac Catalyst and M1/M2 (aarch64) Mac targets are not affected, as their minimum OS versions already have a higher baseline. Refer to the Platform Support Guide for more information.
These changes remove support for multiple older mobile devices (iDevices) and many more Mac systems. Thanks to @madsmtm for compiling the list.
As of this update, the following device models are no longer supported by the latest Rust toolchain:
A total of 27 Mac system models, released between 2007 and 2009, are no longer supported.
The affected systems are not comprehensively listed here, but external resources exist which contain lists of the exact models. They can be found from Apple and Yama-Mac, for example.
The third generation AppleTV (released 2012-2013) is no longer supported.
Prior to now, Rust claimed support for very old Apple OS versions, but many never even received passive testing or support. This is a rough place to be for a toolchain, as it hinders opportunities for improvement in exchange for a support level many people, or everyone, will never utilize. For Apple's mobile platforms, many of the old versions are now even unable to receive new software due to App Store publishing restrictions.
Additionally, the past two years have clearly indicated that Apple, which has tight control over toolchains for these targets, is making it difficult-to-impossible to support them anymore. As of Xcode 14, last year's toolchain release, building for many old OS versions became unsupported. Xcode 15 continues this trend. After enough time, continuing to use an older toolchain can even lead to breaking build issues for others.
We want Rust to be a first-class option for developing software for and on Apple's platforms, but to continue this goal we have to set an easier and more realistic compatibility baseline. The new requirements were determined after surveying the Apple and third-party statistics available to us and picking a middle ground that balances compatibility with Rust's needs and limitations.
If you or an application you develop are affected by this change, there are different options which may be helpful:
If your project does not directly support a specific version, but instead depends on a default previously used by Rust, there are some steps you can take to help improve. For example, a number of crates in the ecosystem have hardcoded Rust's default support versions since they haven't changed for a long time:
If you use the cc crate to include build languages into your project, a future update will handle this transparently.
Use the new rustc --print deployment-target option for the default, or user-set, value on toolchains using Rust 1.71 or newer going forward. Hardcoded defaults should only be used for older toolchains where this is unavailable.
Around the end of July the crates.io team opened an RFC to update the current crates.io usage policies. This policy update addresses operational concerns of the crates.io community service that have arisen since the last significant policy update in 2017, particularly related to name squatting and spam. The RFC has caused considerable discussion, and most of the suggested improvements have since been integrated into the proposal.
At the last team meeting the crates.io team decided to move the RFC forward and start the final comment period process.
We have been made aware by a couple of community members though that the RFC might not have been visible enough in the Rust community. We hope that this blog post changes that.
We invite you all to review the RFC and let us know if there are still any major concerns with these proposed policies.
Here is a quick TL;DR:
Finally, if you have any comments, please open threads on the RFC diff, instead of using the main comment box, to keep the discussion more structured. Thank you!
The Rust team has published a new point release of Rust, 1.72.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.72.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
1.72.1 resolves a few regressions introduced in 1.72.0:
Many people came together to create Rust 1.72.1. We couldn't have done it without all of you. Thanks!
Today we are launching the process to elect new Project Directors to the Rust Foundation Board of Directors. As we begin the process, we wanted to spend some time explaining the goals and procedures we will follow. We will summarize everything here, but if you would like to you can read the official process documentation.
We ask all project members to begin working with their Leadership Council representative to nominate potential Project Directors. See the Candidate Gathering section for more details. Nominations are due by September 15, 2023.
The Rust Foundation Board of Directors has five seats reserved for Project Directors. These Project Directors serve as representatives of the Rust project itself on the Board. Like all Directors, the Project Directors are elected by the entity they represent, which in the case of the Rust Project means they are elected by the Rust Leadership Council. Project Directors serve for a term of two years and will have staggered terms. This year we will appoint two new directors and next year we will appoint three new directors.
The current project directors are Jane Losare-Lusby, Josh Stone, Mark Rousskov, Ryan Levick and Tyler Mandry. This year, Jane Losare-Lusby and Josh Stone will be rotating out of their roles as Project Directors, so the current elections are to fill their seats. We are grateful for the work Jane and Josh have put in during their terms as Project Directors!
We want to make sure the Project Directors can effectively represent the project as a whole, so we are soliciting input from the whole project. The elections process will go through two phases: Candidate Gathering and Election. Read on for more detail about how these work.
The first phase is beginning right now. In this phase, we are inviting the members of all of the top level Rust teams and their subteams to nominate people who will make good project directors. The goal is to bubble these up to the Council through each of the top-level teams. You should be hearing from your Council Representative soon with more details, but if not, feel free to reach out to them directly.
Each team is encouraged to suggest candidates. Since we are electing two new directors, it would be ideal for teams to nominate at least two candidates. Nominees can be anyone in the project and do not have to be a member of the team who nominates them.
The candidate gathering process will be open until September 15, at which point each team's Council Representative will share their team's nominations and reasoning with the whole Leadership Council. At this point, the Council will confirm with each of the nominees that they are willing to accept the nomination and fill the role of Project Director. Then the Council will publish the set of candidates.
This then starts a ten day period where members of the Rust Project are invited to share feedback on the nominees with the Council. This feedback can include reasons why a nominee would make a good project director, or concerns the Council should be aware of.
The Council will announce the set of nominees by September 19 and the ten day feedback period will last until September 29. Once this time has passed, we will move on to the election phase.
The Council will meet during the week of October 1 to complete the election process. In this meeting we will discuss each candidate and once we have done this the facilitator will propose a set of two of them to be the new Project Directors. The facilitator puts this to a vote, and if the Council unanimously agrees with the proposed pair of candidates then the process is completed. Otherwise, we will give another opportunity for council members to express their objections and we will continue with another proposal. This process repeats until we find two nominees who the Council can unanimously consent to. The Council will then confirm these nominees through an official vote.
Once this is done, we will announce the new Project Directors. In addition, we will contact each of the nominees, including those who were not elected, to tell them a little bit more about what we saw as their strengths and opportunities for growth to help them serve better in similar roles in the future.
This process will continue through all of September and into October. Below are the key dates:
After the election meeting happens, the Rust Leadership Council will announce the results and the new Project Directors will assume their responsibilities.
A number of people have been involved in designing and launching this election process and we wish to extend a heartfelt thanks to all of them! We'd especially like to thank the members of the Project Director Election Proposal Committee: Jane Losare-Lusby, Eric Holk, and Ryan Levick. Additionally, many members of the Rust Community have provided feedback and thoughtful discussions that led to significant improvements to the process. We are grateful for all of your contributions.
For years, the Cargo team has encouraged Rust developers to commit their Cargo.lock file for packages with binaries but not libraries. We now recommend people do what is best for their project. To help people make a decision, we include some considerations and suggest committing Cargo.lock as a starting point in their decision making. To align with that starting point, cargo new will no longer ignore Cargo.lock for libraries as of nightly-2023-08-24. Regardless of what decision projects make, we encourage regular testing against their latest dependencies.
The old guidelines ensured libraries tested their latest dependencies, which helped us keep quality high within Rust's package ecosystem by ensuring issues, especially backwards-compatibility issues, were quickly found and addressed. While this extra testing was not exhaustive, we believe it helped foster a culture of quality in this nascent ecosystem.
This hasn't been without its downsides though. Ignoring Cargo.lock removed an important piece of history from code bases, making it harder for maintainers to bisect to find the root cause of a bug. For contributors, especially newer ones, it is another potential source of confusion and frustration from an unreliable CI whenever a dependency is yanked or a new release contains a bug.
A lot has changed for Rust since the guideline was written. Rust has shifted from being a language for early adopters to being more mainstream, and we need to be mindful of the on-boarding experience of these new-to-Rust developers. Also, with this wider adoption, it isn't always practical to assume everyone is using the latest Rust release, and the community has been working through how to manage support for minimum supported Rust versions (MSRV). Part of this is maintaining an instance of your dependency tree that can build with your MSRV. A lockfile is an appropriate way to pin versions for your project so you can validate your MSRV, but we found people were instead putting upper bounds on their version requirements due to the strength of our prior guideline, despite that likely being a worse solution.
The wider software development ecosystem has also changed a lot in the intervening time. CI has become easier to set up and maintain. We also have products like Dependabot and Renovate. This has opened up options besides having version control ignore Cargo.lock to test newer dependencies. Developers could have a scheduled job that first runs cargo update. They could also have bots regularly update their Cargo.lock in PRs, ensuring they pass CI before being merged.
Since there isn't a universal answer to these situations, we felt it was best to leave the choice to developers and give them the information they need to make a decision. For feedback on this policy change, see rust-lang/cargo#8728. You can also reach out to the Cargo team more generally on Zulip.
The Rust team is happy to announce a new version of Rust, 1.72.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.72.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.72.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
cfg-disabled items in errors
You can conditionally enable Rust code using cfg, such as to provide certain functions only with certain crate features, or only on particular platforms. Previously, items disabled in this way would be effectively invisible to the compiler. Now, though, the compiler will remember the name and cfg conditions of those items, so it can report (for example) if a function you tried to call is unavailable because you need to enable a crate feature.
Compiling my-project v0.1.0 (/tmp/my-project)
error[E0432]: unresolved import `rustix::io_uring`
--> src/main.rs:1:5
|
1 | use rustix::io_uring;
| ^^^^^^^^^^^^^^^^ no `io_uring` in the root
|
note: found an item that was configured out
--> /home/username/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.8/src/lib.rs:213:9
|
213 | pub mod io_uring;
| ^^^^^^^^
= note: the item is gated behind the `io_uring` feature
For more information about this error, try `rustc --explain E0432`.
error: could not compile `my-project` (bin "my-project") due to previous error
To prevent user-provided const evaluation from getting into a compile-time infinite loop or otherwise taking unbounded time at compile time, Rust previously limited the maximum number of statements run as part of any given constant evaluation. However, especially creative Rust code could hit these limits and produce a compiler error. Worse, whether code hit the limit could vary wildly based on libraries invoked by the user; if a library you invoked split a statement into two within one of its functions, your code could then fail to compile.
Now, you can do an unlimited amount of const evaluation at compile time. To avoid having long compilations without feedback, the compiler will always emit a message after your compile-time code has been running for a while, and repeat that message after a period that doubles each time. By default, the compiler will also emit a deny-by-default lint (const_eval_long_running) after a large number of steps to catch infinite loops, but you can allow(const_eval_long_running) to permit especially long const evaluation.
Several lints from Clippy have been pulled into rustc:
ManuallyDrop does not drop its inner value, so calling std::mem::drop on it does nothing. Instead, the lint will suggest ManuallyDrop::into_inner first, or you may use the unsafe ManuallyDrop::drop to run the destructor in place. This lint is denied by default.
Calling std::str::from_utf8_unchecked and std::str::from_utf8_unchecked_mut with an invalid UTF-8 literal violates their safety pre-conditions, resulting in undefined behavior. This lint is denied by default.
Calling std::str::from_utf8 and std::str::from_utf8_mut with an invalid UTF-8 literal will always return an error. This lint is a warning by default.
Comparisons with f32::NAN or f64::NAN as one of the operands are always false, because NaN does not compare meaningfully to anything – not even itself. This lint is a warning by default, and will suggest calling the is_nan() method instead.
Casting &T to &mut T without using interior mutability is immediate undefined behavior, even if the reference is unused. This lint is currently allowed by default due to potential false positives, but it is planned to be denied by default in 1.73 after implementation improvements.
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
In a future release we're planning to increase the minimum supported Windows version to 10. The accepted proposal in compiler MCP 651 is that Rust 1.75 will be the last to officially support Windows 7, 8, and 8.1. When Rust 1.76 is released in February 2024, only Windows 10 and later will be supported as tier-1 targets. This change will apply both as a host compiler and as a compilation target.
Many people came together to create Rust 1.72.0. We couldn't have done it without all of you. Thanks!
Hello, Rustaceans!
For the 6th year in a row, the Rust Project conducted a survey on the Rust programming language, with participation from project maintainers, contributors, and those generally interested in the future of Rust. This edition of the annual State of Rust Survey opened for submissions on December 5 and ran until December 22, 2022.
First, we'd like to thank you for your patience on these long delayed results. We hope to identify a more expedient and sustainable process going forward so that the results come out more quickly and have even more actionable insights for the community.
The goal of this survey is always to give our wider community a chance to express their opinions about the language we all love and help shape its future. We’re grateful to those of you who took the time to share your voice on the state of Rust last year.
Before diving into a few highlights, we would like to thank everyone who was involved in creating the State of Rust survey with special acknowledgment to the translators whose work allowed us to offer the survey in English, Simplified Chinese, Traditional Chinese, French, German, Japanese, Korean, Portuguese, Russian, Spanish, and Ukrainian.
In 2022, we had 9,433 total survey completions and an increased survey completion rate of 82% vs. 76% in 2021. While the goal is always total survey completion for all participants, the survey requires time, energy, and focus – we consider this figure quite high and were pleased by the increase.
We also saw a significant increase in the number of people viewing but not participating in the survey (from 16,457 views in 2021 to 25,581 – a view increase of over 55%). While this is likely due to a number of different factors, we feel this information speaks to the rising interest in Rust and the growing general audience following its evolution.
In 2022, the survey had 11,482 responses, which is a slight decrease of 6.4% from 2021; however, the number of respondents who answered all survey questions has increased year over year. We were interested to see this slight decrease in responses, as this year’s survey was much shorter than in previous years – clearly, survey length is not the only factor driving participation.
We were pleased to offer the survey in 11 languages – more than ever before, with the addition of a Ukrainian translation in 2022. 77% of respondents took this year’s survey in English, 5% in Chinese (simplified), 4% in German and French, 2% in Japanese, Spanish, and Russian, and 1% in Chinese (traditional), Korean, Portuguese, and Ukrainian. This is our lowest percentage of respondents taking the survey in English to date, which is an exciting indication of the growing global nature of our community!
The vast majority of our respondents reported being most comfortable communicating on technical topics in English (93%), followed by Chinese (7%).
Rust user respondents were asked which country they live in. The top 13 countries represented were as follows: United States (25%), Germany (12%), China (7%), United Kingdom (6%), France (5%), Canada (4%), Russia (4%), Japan (3%), Netherlands (3%), Sweden (2%), Australia (2%), Poland (2%), India (2%). Nearly 72.5% of respondents elected to answer this question.
While we see global access to Rust education as a critical goal for our community, we are proud to say that Rust was used all over the world in 2022!
More people are using Rust than ever before! Over 90% of survey respondents identified as Rust users, and of those using Rust, 47% do so on a daily basis – an increase of 4% from the previous year.
30% of Rust user respondents can write simple programs in Rust, 27% can write production-ready code, and 42% consider themselves productive using Rust.
Of the former Rust users who completed the survey, 30% cited difficulty as the primary reason for giving up while nearly 47% cited factors outside of their control.
Similarly, 26% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not having used it (with 62% reporting that they simply haven’t had the chance to prioritize learning Rust yet).
The growing maturity of Rust can be seen in the increased number of different organizations utilizing the language in 2022. In fact, 29.7% of respondents stated that they use Rust for the majority of their coding work at their workplace, which is a 51.8% increase compared to the previous year.
There are numerous reasons why we are seeing increased use of Rust in professional environments. Top reasons cited for the use of Rust include the perceived ability to write "bug-free software" (86%), Rust's performance characteristics (84%), and Rust's security and safety guarantees (69%). We were also pleased to find that 76% of respondents continue to use Rust simply because they found it fun and enjoyable. (Respondents could select more than one option here, so the numbers don't add up to 100%.)
Of those respondents that used Rust at work, 72% reported that it helped their team achieve its goals (a 4% increase from the previous year) and 75% have plans to continue using it on their teams in the future.
But like any language being applied in the workplace, Rust’s learning curve is an important consideration; 39% of respondents using Rust in a professional capacity reported the process as “challenging” and 9% of respondents said that adopting Rust at work has “slowed down their team”. However, 60% of productive users felt Rust was worth the cost of adoption overall.
It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!
A key goal of the State of Rust survey is to shed light on the challenges, concerns, and priorities Rustaceans are currently facing.
Of those respondents who shared their main worries for the future of Rust, 26% have concerns that the developers and maintainers behind Rust are not properly supported – a decrease of more than 30% from the previous year’s findings. One area of focus in the future may be to see how the Project in conjunction with the Rust Foundation can continue to push that number towards 0%.
While 38% have concerns about Rust “becoming too complex”, only a small number of respondents were concerned about documentation, corporate oversight, or speed of evolution. 34% of respondents are not worried about the future of Rust at all.
This year’s survey reflects a 21% decrease in fears about Rust’s usage in the industry since the last survey. Faith in Rust’s staying power and general utility is clearly growing as more people find Rust and become lasting members of the community. As always, we are grateful for your honest feedback and dedication to improving this language for everyone.
To quote an anonymous survey respondent, “Thanks for all your hard work making Rust awesome!” – Rust wouldn’t exist or continue to evolve for the better without the many Project members and the wider Rust community. Thank you to those who took the time to share their thoughts on the State of Rust in 2022!
The Rust team has published a new point release of Rust, 1.71.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.71.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
Rust 1.71.1 fixes Cargo not respecting the umask when extracting dependencies, which could allow a local attacker to edit the cache of extracted source code belonging to another local user, potentially executing code as another user. This security vulnerability is tracked as CVE-2023-38497, and you can read more about it on the advisory we published earlier today. We recommend all users to update their toolchain as soon as possible.
Rust 1.71.1 also addresses several regressions introduced in Rust 1.71.0, including bash completion being broken for users of Rustup, and the suspicious_double_ref_op lint being emitted when calling borrow() even though it shouldn't be.
You can find more detailed information on the specific regressions, and other minor fixes, in the release notes.
Many people came together to create Rust 1.71.1. We couldn't have done it without all of you. Thanks!
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG was notified that Cargo did not respect the umask when extracting crate archives on UNIX-like systems. If the user downloaded a crate containing files writeable by any local user, another local user could exploit this to change the source code compiled and executed by the current user.
This vulnerability has been assigned CVE-2023-38497.
In UNIX-like systems, each file has three sets of permissions: for the user owning the file, for the group owning the file, and for all other local users. The "umask" is configured on most systems to limit those permissions during file creation, removing dangerous ones. For example, the default umask on macOS and most Linux distributions allows only the user owning a file to write to it, preventing the group owning it or other local users from doing the same.
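The effect of the umask on newly created files can be demonstrated in a shell (assuming the common default umask of 022):

```shell
# With a umask of 022, a new file's default mode of 666 loses the
# group/other write bits, leaving 644 (rw-r--r--): only the owning
# user may write to it.
umask 022
touch demo-file
ls -l demo-file
```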
When a dependency is downloaded by Cargo, its source code has to be extracted on disk to allow the Rust compiler to read it as part of the build. To improve performance, this extraction only happens the first time a dependency is used, caching the pre-extracted files for future invocations.
Unfortunately, it was discovered that Cargo did not respect the umask during extraction, and propagated the permissions stored in the crate archive as-is. If an archive contained files writeable by any user on the system (and the system configuration didn't prevent writes through other security measures), another local user on the system could replace or tweak the source code of a dependency, potentially achieving code execution the next time the project is compiled.
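The shape of the fix can be sketched as follows. This is not Cargo's actual code; it is a minimal illustration of masking an archive entry's stored mode with the process umask before applying it, rather than propagating the archive's permissions as-is:

```rust
// Mask the permission bits stored in a crate archive entry with the
// process umask, so a world-writable entry (e.g. 0o666) cannot produce
// a world-writable file on disk.
fn sanitized_mode(archive_mode: u32, umask: u32) -> u32 {
    archive_mode & !umask
}

fn main() {
    // A world-writable 0o666 entry, masked with the common umask 0o022,
    // becomes 0o644: only the owning user may write.
    println!("{:o}", sanitized_mode(0o666, 0o022));
}
```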
All Rust versions before 1.71.1 on UNIX-like systems (like macOS and Linux) are affected. Note that additional system-dependent security measures configured on the local system might prevent the vulnerability from being exploited.
Users on Windows and other non-UNIX-like systems are not affected.
We recommend that all users update to Rust 1.71.1, which will be released later today, as it fixes the vulnerability by respecting the umask when extracting crate archives. If you build your own toolchain, patches for 1.71.0 source tarballs are available here.
To prevent existing cached extractions from being exploitable, the Cargo binary included in Rust 1.71.1 or later will purge the caches it tries to access if they were generated by older Cargo versions.
If you cannot update to Rust 1.71.1, we recommend configuring your system to prevent other local users from accessing the Cargo directory, usually located in ~/.cargo:

chmod go= ~/.cargo
We want to thank Addison Crump for responsibly disclosing this to us according to the Rust security policy.
We also want to thank the members of the Rust project who helped us disclose the vulnerability: Weihang Lo for developing the fix; Eric Huss for reviewing the fix; Pietro Albini for writing this advisory; Pietro Albini, Manish Goregaokar and Josh Stone for coordinating this disclosure; Josh Triplett, Arlo Siemen, Scott Schafer, and Jacob Finkelman for advising during the disclosure.
The Rust team is happy to announce a new version of Rust, 1.71.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.71.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.71.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
1.71.0 stabilizes C-unwind (and other -unwind-suffixed ABI variants).
The behavior for unforced unwinding (the typical case) is specified in this table from the RFC which proposed this feature. To summarize:
Each ABI is mostly equivalent to the same ABI without -unwind, except that with -unwind the behavior is defined to be safe when an unwinding operation (a panic or a C++-style exception) crosses the ABI boundary. For panic=unwind, this is a valid way to let exceptions from one language unwind the stack in another language without terminating the process (as long as the exception is caught in the same language it originated from); for panic=abort, this will typically abort the process immediately.

For this initial stabilization, no change is made to the existing ABIs (e.g. "C"), and unwinding across them remains undefined behavior. A future Rust release will amend these ABIs to match the behavior specified in the RFC as the final part of stabilizing this feature (usually aborting at the boundary). Users are encouraged to start using the new unwind ABI variants in their code to remain future-proof if they need to unwind across the ABI boundary.
1.71.0 stabilizes a new attribute, #[debugger_visualizer(natvis_file = "...")] and #[debugger_visualizer(gdb_script_file = "...")], which allows embedding Natvis descriptions and GDB scripts into Rust libraries to improve debugger output when inspecting data structures created by those libraries. Rust itself has packaged similar scripts for some time for the standard library, but this feature makes it possible for library authors to provide a similar experience to end users.

See the reference for details on usage.
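A sketch of usage, following the example in the reference (the attribute is spelled debugger_visualizer there; the Rectangle.natvis filename is illustrative, and the file must exist at build time for this to compile):

```rust
// Crate-level attribute embedding a Natvis description shipped
// alongside the source. The debugger (e.g. Windows debuggers that
// understand Natvis) will then render FancyRect more usefully.
#![debugger_visualizer(natvis_file = "Rectangle.natvis")]

pub struct FancyRect {
    pub x: f32,
    pub y: f32,
    pub dx: f32,
    pub dy: f32,
}
```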
On Windows platforms, Rust now supports using functions from dynamic libraries without requiring those libraries to be available at build time, using the new kind="raw-dylib" option for #[link].
This avoids requiring users to install those libraries (particularly difficult for cross-compilation), and avoids having to ship stub versions of libraries in crates to link against. This simplifies crates providing bindings to Windows libraries.
Rust also supports binding to symbols provided by DLLs by ordinal rather than by name, using the new #[link_ordinal] attribute.
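A minimal, Windows-only sketch of raw-dylib in use (this only compiles for Windows targets; the kernel32 binding is illustrative):

```rust
// Bind to kernel32.dll without needing an import library at build time.
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    // SAFETY: GetCurrentProcessId takes no arguments and has no
    // preconditions.
    let pid = unsafe { GetCurrentProcessId() };
    println!("pid: {pid}");
}
```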
As previously announced, Rust 1.71 updates the musl version to 1.2.3. Most users should not be affected by this change.
Rust 1.59.0 stabilized const-initialized thread-local support in the standard library, which allows for more optimal code generation. However, until now this feature was missed in release notes and documentation. Note that this stabilization does not make const { ... } a valid expression or syntax in other contexts; that is a separate and currently unstable feature.
use std::cell::Cell;

thread_local! {
    pub static FOO: Cell<u32> = const { Cell::new(1) };
}
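To illustrate the semantics, each thread still gets its own, freshly const-initialized copy of the value (a small runnable sketch; the COUNTER name is illustrative):

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // const-initialized: no lazy-initialization check is generated.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 10));
    // A spawned thread observes its own copy, still at the const value.
    let other = thread::spawn(|| COUNTER.with(|c| c.get()))
        .join()
        .unwrap();
    println!("main: {}, spawned: {}", COUNTER.with(|c| c.get()), other);
}
```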
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.71.0. We couldn't have done it without all of you. Thanks!
The regex sub-team is announcing the release of regex 1.9. The regex crate is maintained by the Rust project and is the recommended way to use regular expressions in Rust. Its defining characteristic is its guarantee of worst-case linear time searches with respect to the size of the string being searched.

Releases of the regex crate aren't normally announced on this blog, but since the majority of its internals have been rewritten in version 1.9, this announcement serves to encourage extra scrutiny. If you run into any problems or performance regressions, please report them on the issue tracker or ask questions on the Discussion forum.
Few API additions have been made, but one worth calling out is the Captures::extract method, which should make getting capture groups more convenient in some cases. Otherwise, the main change folks should see is hopefully faster search times.
You can read more in the CHANGELOG and in a more in-depth blog post on regex crate internals as a library.
Rustfmt will add support for formatting let-else statements starting with the nightly 2023-07-02 toolchain, and then let-else formatting support should come to stable Rust as part of the 1.72 release.
let-else statements were stabilized back in 2022 as part of the 1.65.0 release. However, the current and previous versions of Rustfmt did not have formatting support for let-else statements. When Rustfmt encountered a let-else statement it would leave it alone and maintain the manual styling originally authored by the developer.
After updating to one of the toolchains with let-else formatting support, you may notice that cargo fmt/rustfmt invocations want to "change" the formatting of your let-else statements. However, this isn't actually a "change" in formatting, but instead is simply Rustfmt applying the let-else formatting rules for the very first time.
Rustfmt support for let-else statements has been a long standing request, and the Project has taken a number of steps to prevent a recurrence of the delay between feature stabilization and formatting support, as well as putting additional procedures in place which should enable more expeditious formatting support for nightly-only syntax.
Rust has an official Style Guide that articulates the default formatting style for Rust code. The Style Guide functions as a specification that defines the default formatting behavior for Rustfmt, and Rustfmt's primary mission is to provide automated formatting capabilities based around that Style Guide specification. Rustfmt is a direct consumer of the Style Guide, but Rustfmt does not unilaterally dictate what the default formatting style of language constructs should be.
The initial Style Guide was developed many years ago (beginning in 2016), and was driven by a Style Team in collaboration with the community through an RFC process. The Style Guide was then made official in 2018 via RFC 2436.
That initial Style Team was more akin to a Project Working Group in today's terms, as they had a fixed scope with a main goal to simply pull together the initial Style Guide. Accordingly that initial Style Team was disbanded once the Guide was made official.
There was subsequently no designated group within the Rust Project that was explicitly responsible for the Style Guide, and no group explicitly focused on determining the official Style for new language constructs.
The absence of a team/group with ownership of the Style Guide didn't really cause problems at first, as the new syntax that came along during the first few years was comparatively non-controversial when it came to default style and formatting. However, over time challenges started to develop when there was increasingly less community consensus and no governing team within the Project to make the final decision about how new language syntax should be styled.
This was certainly the case with let-else statements, with lots of varying perspectives on how they should be styled. Without any team/group to make the decision and update the Style Guide with the official rules for let-else statements, Rustfmt was blocked and was unable to proceed.
These circumstances around let-else statements resulted in a greater understanding across the Project of the need to establish a team to own and maintain the Style Guide. However, it was also well understood that spinning up a new team and respective processes would take some time, and the decision was made to not block the stabilization of features that were otherwise fully ready to be stabilized, like let-else statements, in the nascency of such a new team and new processes.
Accordingly, let-else statements were stabilized and released without formatting support and with an understanding that the new Style Team and then subsequently the Rustfmt Team would later complete the requisite work required to incorporate formatting support.
A number of steps have been taken to improve matters in this space. This includes steps to address the aforementioned issues and deal with some of the "style debt" that accrued over the years in the absence of a Style Team, and also to establish new processes and mechanisms to bring about other formatting/styling improvements.
Furthermore, the Style Team is also continuing to diligently work through the backlog of those "style debt" items, and the Rustfmt team is in turn actively working on respective formatting implementation. The Rustfmt team is also focused on growing the team in order to improve contributor and review capacity.
We know that many have wanted let-else formatting support for a while, and we're sorry it's taken this long. We also recognize that Rustfmt now starting to format let-else statements may cause some formatting churn, and that's a highly undesirable scenario we strive to avoid.
However, we believe the benefits of delivering let-else formatting support outweigh those drawbacks. While it's possible there may be another future case or two where we have to do something similar as we work through the style backlog, we're hopeful that over time this new team and these new processes will reduce (or eliminate) the possibility of a recurrence by addressing the historical problems that played such an outsize role in the let-else delay, and also bring about various other improvements.
Both the Style and Rustfmt teams hang out on Zulip so if you'd like to get more involved or have any questions please drop by on T-Style and/or T-Rustfmt.
If you recently generated a new API token on crates.io, you might have noticed our new API token creation page and some of the new features it now supports.
Previously, when clicking the "New Token" button on https://crates.io/settings/tokens, you were only provided with the option to choose a token name, without any additional choices. We knew that we wanted to offer our users more flexibility, but in the previous user interface that would have been difficult, so our first step was to build a proper "New API Token" page.
Our roadmap included two essential features known as "token scopes". The first of them allows you to restrict API tokens to specific operations. For instance, you can configure a token to solely enable the publishing of new versions for existing crates, while disallowing the creation of new crates. The second one offers an optional restriction where tokens can be limited to only work for specific crate names. If you want to read more about how these features were planned and implemented you can take a look at our corresponding tracking issue.
To further enhance the security of crates.io API tokens, we prioritized the implementation of expiration dates. Since we had already touched most of the token-related code, this was relatively straightforward. We are delighted to announce that our "New API Token" page now supports endpoint scopes, crate scopes, and expiration dates:
Similar to the API token creation process on github.com, you can choose to not have any expiration date, use one of the presets, or even choose a custom expiration date to suit your requirements.
If you come across any issues or have questions, feel free to reach out to us on Zulip or open an issue on GitHub.
Lastly, we, the crates.io team, would like to express our gratitude to the OpenSSF's Alpha-Omega Initiative and JFrog for their contributions to the Rust Foundation security initiative. Their support has been instrumental in enabling us to implement these features and undertake extensive security-related work on the crates.io codebase over the past few months.
As of today, RFC 3392 has been merged, forming the new top level governance body of the Rust Project: the Leadership Council. The creation of this Council marks the end of both the Core Team and the interim Leadership Chat.
The Council will assume responsibility for top-level governance concerns while most of the responsibilities of the Rust Project (such as maintenance of the compiler and core tooling, evolution of the language and standard libraries, administration of infrastructure, etc.) remain with the nine top level teams.
Each of these top level teams, as defined in the RFC, has chosen a representative who collectively form the Council:
First, we want to take a moment to thank the Core Team and interim Leadership Chat for the hard work they've put in over the years. Their efforts have been critical for the Rust Project. However, we do recognize that the governance of the Rust Project has had its shortcomings. We hope to build on the successes and improve upon the failures to ultimately lead to greater transparency and accountability.
We know that there is a lot of work to do and we are eager to get started. In the coming weeks we will be establishing the basic infrastructure for the group, including creating a plan for regular meetings and a process for raising agenda items, setting up a team repository, and ultimately completing the transition from the former Rust leadership structures.
We will post more once this bootstrapping process has been completed.
The Rust team is happy to announce a new version of Rust, 1.70.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.70.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.70.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
Cargo's "sparse" protocol is now enabled by default for reading the index from crates.io. This feature was previously stabilized in Rust 1.68.0, but still required configuration to use it with crates.io. The announced plan was to make that the default in 1.70.0, and here it is!
You should see substantially improved performance when fetching information from the crates.io index. Users behind a restrictive firewall will need to ensure that access to https://index.crates.io
is available. If for some reason you need to stay with the previous default of using the git index hosted by GitHub, the registries.crates-io.protocol config setting can be used to change the default.
One side-effect to note about changing the access method is that this also changes the path to the crate cache, so dependencies will be downloaded anew. Once you have fully committed to using the sparse protocol, you may want to clear out the old $CARGO_HOME/registry/*/github.com-*
paths.
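For instance, opting back into the git index can be done with a small Cargo config entry like the following sketch (the setting name comes from the paragraph above; the file location is the conventional one):

```toml
# .cargo/config.toml
[registries.crates-io]
protocol = "git"
```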
OnceCell and OnceLock

Two new types have been stabilized for one-time initialization of shared data, OnceCell and its thread-safe counterpart OnceLock. These can be used anywhere that immediate construction is not wanted, and perhaps not even possible, like non-const data in global variables.
use std::sync::OnceLock;

static WINNER: OnceLock<&str> = OnceLock::new();

fn main() {
    let winner = std::thread::scope(|s| {
        s.spawn(|| WINNER.set("thread"));

        std::thread::yield_now(); // give them a chance...

        WINNER.get_or_init(|| "main")
    });

    println!("{winner} wins!");
}
Crates such as lazy_static and once_cell have filled this need in the past, but now these building blocks are part of the standard library, ported from once_cell's unsync and sync modules. There are still more methods that may be stabilized in the future, as well as companion LazyCell and LazyLock types that store their initializing function, but this first step in stabilization should already cover many use cases.
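The single-threaded counterpart works the same way; a small runnable sketch of OnceCell's one-shot semantics:

```rust
use std::cell::OnceCell;

fn main() {
    let cell: OnceCell<String> = OnceCell::new();
    assert!(cell.get().is_none());

    // The closure runs at most once; later calls return the stored value.
    let value = cell.get_or_init(|| "computed once".to_string());
    assert_eq!(value, "computed once");

    // A second set is rejected: the first value wins.
    assert!(cell.set("too late".to_string()).is_err());
    println!("{}", cell.get().unwrap());
}
```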
IsTerminal

This newly-stabilized trait has a single method, is_terminal, to determine if a given file descriptor or handle represents a terminal or TTY. This is another case of standardizing functionality that existed in external crates, like atty and is-terminal, using the C library isatty function on Unix targets and similar functionality elsewhere. A common use case is for programs to distinguish between running in scripts or interactive modes, like presenting colors or even a full TUI when interactive.
use std::io::{stdout, IsTerminal};

fn main() {
    let use_color = stdout().is_terminal();
    // if so, add color codes to program output...
}
The -Cdebuginfo compiler option has previously only supported numbers 0..=2 for increasing amounts of debugging information, where Cargo defaults to 2 in dev and test profiles and 0 in release and bench profiles. These debug levels can now be set by name: "none" (0), "limited" (1), and "full" (2), as well as two new levels, "line-directives-only" and "line-tables-only".

The Cargo and rustc documentation both called level 1 "line tables only" before, but it was more than that, including information about all functions, just not types and variables. That level is now called "limited", and the new "line-tables-only" level is further reduced to the minimum needed for backtraces with filenames and line numbers. This may eventually become the level used for -Cdebuginfo=1. The other new level, line-directives-only, is intended for NVPTX profiling, and is otherwise not recommended.

Note that these named options are not yet available to be used via Cargo.toml. Support for that will be available in the next release, 1.71.
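Until Cargo.toml support lands, the named levels can be passed directly to rustc (an illustrative invocation; main.rs stands in for your crate root):

```shell
# Keep just enough debug info for backtraces with filenames and
# line numbers (Rust >= 1.70).
rustc -C debuginfo=line-tables-only main.rs
```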
Enforced stability in the test CLI

When #[test] functions are compiled, the executable gets a command-line interface from the test crate. This CLI has a number of options, including some that are not yet stabilized and require specifying -Zunstable-options as well, like many other commands in the Rust toolchain. However, while that's only intended to be allowed in nightly builds, that restriction wasn't active in test -- until now. Starting with 1.70.0, stable and beta builds of Rust will no longer allow unstable test options, making them truly nightly-only as documented.
There are known cases where unstable options may have been used without direct user knowledge, especially --format json
used in IntelliJ Rust and other IDE plugins. Those projects are already adjusting to this change, and the status of JSON output can be followed in its tracking issue.
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.70.0. We couldn't have done it without all of you. Thanks!
On May 26th 2023, JeanHeyd Meneide announced they would not speak at RustConf 2023 anymore. They were invited to give a keynote at the conference, only to be told two weeks later the keynote would be demoted to a normal talk, due to a decision made within the Rust project leadership.
That decision was not right, and first off we want to publicly apologize for the harm we caused. We failed you JeanHeyd. The idea of downgrading a talk after the invitation was insulting, and nobody in leadership should have been willing to entertain it.
Everyone in leadership chat is still working to fully figure out everything that went wrong and how we can prevent all of this from happening again. That work is not finished yet. Still, we want to share some steps we are taking to reduce the risk of something like this from happening again.
The primary causes of the failure were the decision-making and communication processes of leadership chat. Leadership chat has been the top-level governance structure created after the previous Moderation Team resigned in late 2021. It’s made of all leads of top-level teams, all members of the Core Team, all project directors on the Rust Foundation board, and all current moderators. This leadership chat was meant as a short-term solution and lacked clear rules and processes for decision making and communication. This left a lot of room for misunderstandings about when a decision had actually been made and when individuals were speaking for the project versus themselves.
In this post we focus on the organizational and process failure, leaving room for individuals to publicly acknowledge their own role. Nonetheless, formal rules or governance processes should not be required to identify that demoting JeanHeyd’s keynote was the wrong thing to do. The fact is that several individuals exercised poor judgment and poor communication. Recognizing their outsized role in the situation, those individuals have opted to step back from top-level governance roles, including leadership chat and the upcoming leadership council.
Organizationally, within leadership chat we will enforce a strict consensus rule for all decision making, so that there is no longer ambiguity of whether something is an individual opinion or a group decision. We are going to launch the new governance council as soon as possible. We’ll assist the remaining teams to select their representatives in a timely manner, so that the new governance council can start and the current leadership chat can disband.
We wish to close the post by reiterating our apology to JeanHeyd, but also the wider Rust community. You deserved better than you got from us.
-- The members of leadership chat
Beginning with Rust 1.71 (slated for stable release on 2023-07-13), the various *-linux-musl targets will ship with musl 1.2.3. These targets currently use musl 1.1.24. While musl 1.2.3 introduces some new features, most notably 64-bit time on all platforms, it is ABI-compatible with earlier musl versions.

As such, this change is unlikely to affect you.
The following targets will be updated:

| Target | Tier |
|--------|------|
| `aarch64-unknown-linux-musl` | Tier 2 with Host Tools |
| `x86_64-unknown-linux-musl` | Tier 2 with Host Tools |
| `arm-unknown-linux-musleabi` | Tier 2 |
| `arm-unknown-linux-musleabihf` | Tier 2 |
| `armv5te-unknown-linux-musleabi` | Tier 2 |
| `armv7-unknown-linux-musleabi` | Tier 2 |
| `armv7-unknown-linux-musleabihf` | Tier 2 |
| `i586-unknown-linux-musl` | Tier 2 |
| `i686-unknown-linux-musl` | Tier 2 |
| `mips-unknown-linux-musl` | Tier 2 |
| `mips64-unknown-linux-muslabi64` | Tier 2 |
| `mips64el-unknown-linux-muslabi64` | Tier 2 |
| `mipsel-unknown-linux-musl` | Tier 2 |
| `hexagon-unknown-linux-musl` | Tier 3 |
| `mips64-openwrt-linux-musl` | Tier 3 |
| `powerpc-unknown-linux-musl` | Tier 3 |
| `powerpc64-unknown-linux-musl` | Tier 3 |
| `powerpc64le-unknown-linux-musl` | Tier 3 |
| `riscv32gc-unknown-linux-musl` | Tier 3 |
| `riscv64gc-unknown-linux-musl` | Tier 3 |
| `s390x-unknown-linux-musl` | Tier 3 |
| `thumbv7neon-unknown-linux-musleabihf` | Tier 3 |
Note: musl 1.2.3 does not raise the minimum required Linux kernel version for any target.
Will this change break the `libc` crate on 32-bit targets?

No, the musl project made this change while carefully preserving ABI compatibility. The `libc` crate will continue to function correctly without modification.
A future version of the `libc` crate will update the definitions of time-related structures and functions to be 64-bit on all musl targets; however, this is blocked on the musl targets themselves first being updated. At present there is no anticipated date for this change, and care will be taken to help the Rust ecosystem transition successfully to the updated time-related definitions.
The rustup working group is happy to announce the release of rustup version 1.26.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.26.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:
rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
This version of Rustup involves a significant number of internal cleanups, both in terms of the Rustup code and its tests. In addition to a lot of work on the codebase itself, due to the length of time since the last release this one has a record number of contributors and we thank you all for your efforts and time.
The headlines for this release are:
Full details are available in the changelog!
Rustup's documentation is also available in the rustup book.
Thanks again to all the contributors who made rustup 1.26.0 possible!
The Rust team is happy to announce a nice version of Rust, 1.69.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.69.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.69.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!
Rust 1.69.0 introduces no major new features. However, it contains many small improvements, including over 3,000 commits from over 500 contributors.
Rust 1.29.0 added the `cargo fix` subcommand to automatically fix some simple compiler warnings. Since then, the number of warnings that can be fixed automatically has continued to steadily increase. In addition, support for automatically fixing some simple Clippy warnings has also been added.
In order to draw more attention to these increased capabilities, Cargo will now suggest running `cargo fix` or `cargo clippy --fix` when it detects warnings that are automatically fixable:
```
warning: unused import: `std::hash::Hash`
 --> src/main.rs:1:5
  |
1 | use std::hash::Hash;
  |     ^^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: `foo` (bin "foo") generated 1 warning (run `cargo fix --bin "foo"` to apply 1 suggestion)
```
Note that the full Cargo invocation shown above is only necessary if you want to precisely apply fixes to a single crate. If you want to apply fixes to all the default members of a workspace, then a simple `cargo fix` (with no additional arguments) will suffice.
To improve compilation speed, Cargo now avoids emitting debug information in build scripts by default. There will be no visible effect when build scripts execute successfully, but backtraces in build scripts will contain less information.
If you want to debug a build script, you can add this snippet to your `Cargo.toml` to emit debug information again:

```toml
[profile.dev.build-override]
debug = true
[profile.release.build-override]
debug = true
```
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.69.0. We couldn't have done it without all of you. Thanks!
The Rust team has published a new point release of Rust, 1.68.2. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.68.2 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.68.2 on GitHub.
Rust 1.68.2 addresses GitHub's recent rotation of their RSA SSH host key, which happened on March 24th 2023 after their previous key accidentally leaked:
Support for `@revoked` entries in `.ssh/known_hosts` (along with a better error message when the unsupported `@cert-authority` entries are used) is also included in Rust 1.68.2, as that change was a prerequisite for backporting the hardcoded revocation.
If you cannot upgrade to Rust 1.68.2, we recommend following GitHub's instructions on updating the trusted keys in your system. Note that the keys bundled in Cargo are only used if no trusted key for github.com is found on the system.
Many people came together to create Rust 1.68.2. We couldn't have done it without all of you. Thanks!
The Rust team has published a new point release of Rust, 1.68.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.68.1 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.68.1 on GitHub.
Rust 1.68.1 stable primarily contains a change to how Rust's CI builds the Windows MSVC compiler, no longer enabling LTO for the Rust code. This led to a miscompilation that the Rust team is debugging, but in the meantime we're reverting the change to enable LTO.
This is currently believed to have no effect on wider usage of ThinLTO. The Rust compiler used an unstable flag as part of the build process to enable ThinLTO despite compiling to a dylib.
There are a few other regression fixes included in the release:
Many people came together to create Rust 1.68.1. We couldn't have done it without all of you. Thanks!
The Rust team is happy to announce a new version of Rust, 1.68.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.68.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.68.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!
Cargo's "sparse" registry protocol has been stabilized for reading the index of crates, along with infrastructure at https://index.crates.io/ for those published in the primary crates.io registry. The prior git protocol (which is still the default) clones a repository that indexes all crates available in the registry, but this has started to hit scaling limitations, with noticeable delays while updating that repository. The new protocol should provide a significant performance improvement when accessing crates.io, as it will only download information about the subset of crates that you actually use.
To use the sparse protocol with crates.io, set the environment variable `CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse`, or edit your `.cargo/config.toml` file to add:

```toml
[registries.crates-io]
protocol = "sparse"
```
The sparse protocol is currently planned to become the default for crates.io in the 1.70.0 release in a few months. For more information, please see the prior announcement on the Inside Rust Blog, as well as RFC 2789 and the current documentation in the Cargo Book.
`Pin` construction

The new `pin!` macro constructs a `Pin<&mut T>` from a `T` expression, anonymously captured in local state. This is often called stack-pinning, but that "stack" could also be the captured state of an `async fn` or block. This macro is similar to some crates, like `tokio::pin!`, but the standard library can take advantage of `Pin` internals and temporary lifetime extension for a more expression-like macro.
```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll};
use std::thread;

/// Runs a future to completion.
fn block_on<F: Future>(future: F) -> F::Output {
    let waker_that_unparks_thread = todo!();
    let mut cx = Context::from_waker(&waker_that_unparks_thread);
    // Pin the future so it can be polled.
    let mut pinned_future = pin!(future);
    loop {
        match pinned_future.as_mut().poll(&mut cx) {
            Poll::Pending => thread::park(),
            Poll::Ready(result) => return result,
        }
    }
}
```
In this example, the original `future` will be moved into a temporary local, referenced by the new `pinned_future` with type `Pin<&mut F>`, and that pin is subject to the normal borrow checker to make sure it can't outlive that local.
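Outside of async code, `pin!` can also pin any local value. A minimal sketch (not from the release notes) showing the macro and the resulting `Pin<&mut T>`:

```rust
use std::pin::{pin, Pin};

fn main() {
    // `pin!` captures 41 in an anonymous local and hands back a Pin<&mut i32>.
    let mut pinned: Pin<&mut i32> = pin!(41);
    // Because i32 is Unpin, the Pin derefs mutably to the pinned value.
    *pinned += 1;
    assert_eq!(*pinned, 42);
    println!("{}", *pinned);
}
```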
`alloc` error handler

When allocation fails in Rust, APIs like `Box::new` and `Vec::push` have no way to indicate that failure, so some divergent execution path needs to be taken. When using the `std` crate, the program will print to `stderr` and abort. As of Rust 1.68.0, binaries which include `std` will continue to have this behavior. Binaries which do not include `std`, only including `alloc`, will now `panic!` on allocation failure, which may be further adjusted via a `#[panic_handler]` if desired.
In the future, it's likely that the behavior for `std` will also be changed to match that of `alloc`-only binaries.
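For programs that want to observe allocation failure rather than take the divergent path, the stable `try_reserve` family of APIs reports failure as a `Result` (a small sketch, not part of the release notes):

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    // Unlike plain `push`/`reserve`, `try_reserve` returns an error value
    // instead of aborting or panicking when allocation fails.
    match v.try_reserve(10) {
        Ok(()) => v.extend_from_slice(b"0123456789"),
        Err(e) => eprintln!("allocation failed: {e}"),
    }
    assert_eq!(v.len(), 10);
    println!("{}", v.len());
}
```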
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.68.0. We couldn't have done it without all of you. Thanks!
The Rust team has published a new point release of Rust, 1.67.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.67.1 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.67.1 on GitHub.
Rust 1.67.1 fixes a regression for projects that link to thin archives (`.a` files that reference external `.o` objects). The new archive writer in 1.67.0 could not read thin archives as inputs, leading to the error "Unsupported archive identifier." The compiler now uses LLVM's archive writer again, until that format is supported in the new code.
Additionally, the Clippy style lint `uninlined_format_args` is temporarily downgraded to pedantic -- allowed by default. While the compiler has supported this format since Rust 1.58, `rust-analyzer` does not support it yet, so it's not necessarily good to use that style everywhere possible.
The final change is a soundness fix in Rust's own bootstrap code. This had no known problematic uses, but it did raise an error when bootstrap was compiled with 1.67 itself, rather than the prior 1.66 release as usual.
Many people came together to create Rust 1.67.1. We couldn't have done it without all of you. Thanks!
The rustup working group is announcing the release of rustup version 1.25.2. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.25.2 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:
rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
This version of rustup fixes a warning which incorrectly said that signature verification failed for Rust releases. The warning was due to a dependency of rustup including a time-based check preventing the use of SHA-1 from February 1st, 2023 onwards.
Unfortunately, Rust's release signing key uses SHA-1 to sign its subkeys, which resulted in all signatures being marked as invalid. Rustup 1.25.2 temporarily fixes the problem by allowing the use of SHA-1 again.
Signature verification is currently an experimental and incomplete feature of rustup, as it's still missing crucial features like key rotation. Until the feature is complete and ready for use, its outcomes are only displayed as warnings, without a way to turn them into errors.
This is done to avoid potentially breaking installations of rustup. Signature verification will error out on failure only after the design and implementation of the feature are finished.
Thanks again to all the contributors who made rustup 1.25.2 possible!
The Rust team is happy to announce a new version of Rust, 1.67.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.67.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.67.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!
`#[must_use]` effective on `async fn`

`async` functions annotated with `#[must_use]` now apply that attribute to the output of the returned `impl Future`. The `Future` trait itself is already annotated with `#[must_use]`, so all types implementing `Future` are automatically `#[must_use]`, which meant that previously there was no way to indicate that the output of the `Future` is itself significant and should be used in some way.
With 1.67, the compiler will now warn if the output isn't used in some way.
```rust
#[must_use]
async fn bar() -> u32 { 0 }

async fn caller() {
    bar().await;
}
```
```
warning: unused output of future returned by `bar` that must be used
 --> src/lib.rs:5:5
  |
5 |     bar().await;
  |     ^^^^^^^^^^^
  |
  = note: `#[warn(unused_must_use)]` on by default
```
`std::sync::mpsc` implementation updated

Rust's standard library has had a multi-producer, single-consumer channel since before 1.0, but in this release the implementation is switched out to be based on crossbeam-channel. This release contains no API changes, but the new implementation fixes a number of bugs and improves the performance and maintainability of the implementation.
Users should not notice any significant changes in behavior as of this release.
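Since the API is unchanged, existing channel code continues to work with the new implementation. A small self-contained sketch (not from the release notes):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    // Multiple producers are allowed; here a single spawned thread sends values.
    thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).unwrap();
        }
        // `tx` is dropped here, which closes the channel.
    });
    // `iter()` blocks for incoming values until the channel is closed.
    let sum: i32 = rx.iter().sum();
    assert_eq!(sum, 3);
    println!("{sum}");
}
```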
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.67.0. We couldn't have done it without all of you. Thanks!
Oh hey, it's another new team announcement. But I will admit: if you follow the RFCs repository, the Rust zulip, or were particularly observant on the GATs stabilization announcement post, then this might not be a surprise for you. In fact, this "new" team was officially established at the end of May last year.
There are a few reasons why we're sharing this post now (as opposed to months before or... never). First, the team finished a three-day in-person/hybrid meetup at the beginning of December, and we'd like to share the purpose and outcomes of that meeting. Second, this announcement comes after roughly 7 months of activity, and we'd love to share what we've accomplished in that time. Lastly, as we enter the new year of 2023, it's a great time to share a bit of where we expect to head this year and beyond.
Rust has grown significantly in the last several years, in many metrics: users, contributors, features, tooling, documentation, and more. As it has grown, the list of things people want to do with it has grown just as quickly. On top of powerful and ergonomic features, the demand for powerful tools such as IDEs or learning tools for the language has become more and more apparent. New compilers (frontend and backend) are being written. And, to top it off, we want Rust to continue to maintain one of its core design principles: safety.
All of these points highlight some key needs: to be able to know how the Rust language should work, to be able to extend the language and compiler with new features in a relatively painless way, to be able to hook into the compiler and query important information about programs, and finally to be able to maintain the language and compiler in an amenable and robust way. Over the years, considerable effort has been put into these needs, but we haven't quite achieved these key requirements.
To extend a little, and put some numbers to paper, there are currently around 220 open tracking issues for language, compiler, or types features that have been accepted but are not completely implemented, of which about half are at least 3 years old and many are several years older than that. Many of these tracking issues have been open for so long not solely because of bandwidth, but because working on these features is hard, in large part because putting the relevant semantics in context of the larger language properly is hard; it's not easy for anyone to take a look at them and know what needs to be done to finish them. It's clear that we still need better foundations for making changes to the language and compiler.
Another number that might shock you: there are currently 62 open unsoundness issues. This sounds much scarier than it really is: nearly all of these are edges of the compiler and language that have been found by people who specifically poke and prod to find them; in practice these will not pop up in the programs you write. Nevertheless, these are edges we want to iron out.
Moving forward, let's talk about a smaller subset of Rust rather than the entire language and compiler. Specifically, the parts relevant here include the type checker - loosely, defining the semantics and implementation of how variables are assigned their type, trait solving - deciding what traits are defined for which types, and borrow checking - proving that Rust's ownership model always holds. All of these can be thought of cohesively as the "type system".
As of RFC 3254, the above subset of the Rust language and compiler are under the purview of the types team. So, what exactly does this entail?
First, since around 2018, there has existed the "traits working group", which had the primary goal of creating a performant and extensible definition and implementation of Rust's trait system (including the Chalk trait-solving library). As time progressed, and particularly in the latter half of 2021 into 2022, the working group's influence and responsibility naturally expanded to the type checker and borrow checker too - they are actually strongly linked, and it's often hard to disentangle the trait solver from the other two. So, in some ways, the types team essentially subsumes the former traits working group.
Another relevant working group is the polonius working group, which primarily works on the design and implementation of the Polonius borrow-checking library. While the working group itself will remain, it is now also under the purview of the types team.
Now, although the traits working group was essentially folded into the types team, the creation of a team has some benefits. First, like the style team (and many other teams), the types team is not a top level team. It actually, currently uniquely, has two parent teams: the lang and compiler teams. Both teams have decided to delegate decision-making authority covering the type system.
The language team has delegated part of the design of the type system. Importantly, however, this delegation covers less of the "feel" of the features of the type system and more of how it "works", with the expectation that the types team will advise and bring concerns about new language extensions where required. (This division is not strongly defined, but the expectation is generally to err on the side of more caution.) The compiler team, on the other hand, has delegated the responsibility of defining and maintaining the implementation of the trait system.
One particular responsibility that has traditionally been shared between the language and compiler teams is the assessment and fixing of soundness bugs in the language related to the type system. These often arise from implementation-defined language semantics and have in the past required synchronization and input from both lang and compiler teams. In the majority of cases, the types team now has the authority to assess and implement fixes without the direct input from either parent team. This applies, importantly, for fixes that are technically backwards-incompatible. While fixing safety holes is not covered under Rust's backwards compatibility guarantees, these decisions are not taken lightly and generally require team signoff and are assessed for potential ecosystem breakage with crater. However, this can now be done under one team rather than requiring the coordination of two separate teams, which makes closing these soundness holes easier (I will discuss this more later.)
As mentioned above, a nearly essential element of the growing Rust language is to know how it should work (and to have this well documented). There are relatively recent efforts pushing for a Rust specification (like Ferrocene or this open RFC), but it would be hugely beneficial to have a formalized definition of the type system, regardless of its potential integration into a more general specification. In fact the existence of a formalization would allow a better assessment of potential new features or soundness holes, without the subtle intricacies of the rest of the compiler.
As far back as 2015, not long after the release of Rust 1.0, an experimental Rust trait solver called Chalk began to be written. The core idea of Chalk is to translate the surface syntax and ideas of the Rust trait system (e.g. traits, impls, where clauses) into a set of logic rules that can be solved using a Prolog-like solver. Then, once this set of logic and solving reaches parity with the trait solver within the compiler itself, the plan was to simply replace the existing solver. In the meantime (and continuing forward), this new solver could be used by other tools, such as rust-analyzer, where it is used today.
Now, given Chalk's age and the promises it had been hoped to be able to deliver on, you might be tempted to ask the question "Chalk, when?" - and plenty have. However, we've learned over the years that Chalk is likely not the correct long-term solution for Rust, for a few reasons. First, as mentioned a few times in this post, the trait solver is only a part of a larger type system; and modeling how the entire type system fits together gives a more complete picture of its details than trying to model the parts separately. Second, the needs of the compiler are quite different from the needs of a formalization: the compiler needs performant code with the ability to track information required for powerful diagnostics, while a good formalization is one that is not only complete, but also easy to maintain, read, and understand. Over the years, Chalk has tried to have both, and it has so far ended up with neither.
So, what are the plans going forward? Well, first, the types team has begun working on a formalization of the Rust type system, currently coined a-mir-formality. An initial experimental phase was written using PLT Redex, but a Rust port is in progress. There's lots to do still (including modeling more of the trait system, writing an RFC, and moving it into the rust-lang org), but it's already showing great promise.
Second, we've begun an initiative for writing a new trait solver in-tree. This new trait solver is more limited in scope than a-mir-formality (i.e. not intending to encompass the entire type system). In many ways, it's expected to be quite similar to Chalk, but to leverage bits and pieces of the existing compiler and trait solver in order to make the transition as painless as possible. We do expect it to be pulled out-of-tree at some point, so it's being written to be as modular as possible. During our types team meetup earlier this month, we were able to hash out what we expect the structure of the solver to look like, and we've already gotten that merged into the source tree.
Finally, Chalk is no longer going to be a focus of the team. In the short term, it still may remain a useful tool for experimentation. As said before, rust-analyzer uses Chalk as its trait solver. It's also able to be used in rustc under an unstable feature flag. Thus, new ideas could currently be implemented in Chalk and battle-tested in practice. However, this benefit will likely not last long as a-mir-formality and the new in-tree trait solver become more usable and their interfaces become more accessible. All this is not to say that Chalk has been a failure. In fact, Chalk has taught us a lot about how to think about the Rust trait solver in a logical way, and the current Rust trait solver has evolved over time to more closely model Chalk, even if incompletely. We expect to still support Chalk in some capacity for the time being, for rust-analyzer and potentially for those interested in experimenting with it.
As brought up previously, a big benefit of creating a new types team with delegated authority from both the lang and compiler teams is the authority to assess and fix unsoundness issues mostly independently. However, a secondary benefit has actually just been better procedures and knowledge-sharing that allows the members of the team to get on the same page for what soundness issues there are, why they exist, and what it takes to fix them. For example, during our meetup earlier this month, we were able to go through the full list of soundness issues (focusing on those relevant to the type system), identify their causes, and discuss expected fixes (though most require prerequisite work discussed in the previous section).
Additionally, the team has already made a number of soundness fixes and has a few more in progress. I won't go into details, but instead am just opting to put them in list form:
As you can see, we're making progress on closing soundness holes. These sometimes break code, as assessed by crater. However, we do what we can to mitigate this, even when the code being broken is technically unsound.
While it's not technically under the types team purview to propose and design new features (these fall more under lang team proper), there are a few instances where the team is heavily involved (if not driving) feature design.
These can be small additions, which are close to bug fixes. For example, this PR allows more permutations of lifetime outlives bounds than what compiled previously. Or, these PRs can be larger, more impactful changes that don't fit under a "feature", but instead are tied heavily to the type system. For example, this PR makes the `Sized` trait coinductive, which effectively makes more cyclic bounds compile (see this test for an example).
There are also a few larger features and feature sets that have been driven by the types team, largely due to the heavy intersection with the type system. Here are a few examples:
To conclude, let's put all of this onto a roadmap. As always, goals are best when they are specific, measurable, and time-bound. For this, we've decided to split our goals into roughly 4 stages: summer of 2023, end-of-year 2023, end-of-year 2024, and end-of-year 2027 (6 months, 1 year, 2 years, and 5 years). Overall, our goals are to build a platform to maintain a sound, testable, and documented type system that can scale to new features needed by the Rust language. Furthermore, we want to cultivate a sustainable and open-source team (the types team) to maintain that platform and type system.
A quick note: some of the things here have not quite been explained in this post, but they've been included in the spirit of completeness. So, without further ado:
6 months
EOY 2023
EOY 2024
- `impl Trait` basically anywhere
EOY 2027
It's an exciting time for Rust. As its userbase and popularity grows, the language does as well. And as the language grows, the need for a sustainable type system to support the language becomes ever more apparent. The project has formed this new types team to address this need and hopefully, in this post, you can see that the team has so far accomplished a lot. And we expect that trend to only continue over the next many years.
As always, if you'd like to get involved or have questions, please drop by the Rust zulip.
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG was notified that Cargo did not perform SSH host key verification when cloning indexes and dependencies via SSH. An attacker could exploit this to perform man-in-the-middle (MITM) attacks.
This vulnerability has been assigned CVE-2022-46176.
When an SSH client establishes communication with a server, to prevent MITM attacks the client should check whether it already communicated with that server in the past and what the server's public key was back then. If the key changed since the last connection, the connection must be aborted as a MITM attack is likely taking place.
It was discovered that Cargo never implemented such checks, and performed no validation on the server's public key, leaving Cargo users vulnerable to MITM attacks.
All Rust versions containing Cargo before 1.66.1 are vulnerable.
Note that even if you don't explicitly use SSH for alternate registry indexes or crate dependencies, you might be affected by this vulnerability if you have configured git to replace HTTPS connections to GitHub with SSH (through git's `url.<base>.insteadOf` setting), as that'd cause you to clone the crates.io index through SSH.
We will be releasing Rust 1.66.1 today, 2023-01-10, changing Cargo to check the SSH host key and abort the connection if the server's public key is not already trusted. We recommend everyone upgrade as soon as possible.
Patch files for Rust 1.66.0 are also available here for custom-built toolchains.
For the time being Cargo will not ask the user whether to trust a server's public key during the first connection. Instead, Cargo will show an error message detailing how to add that public key to the list of trusted keys. Note that this might break your automated builds if the hosts you clone dependencies or indexes from are not already trusted.
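As a concrete illustration, Cargo's configuration gained a way to declare additional trusted host keys alongside the system's `known_hosts` file. A hypothetical sketch of such an entry in `~/.cargo/config.toml` (the host name and key below are placeholders, not real values):

```toml
# Hypothetical example: trusting an internal git host's SSH key.
[net.ssh]
known-hosts = [
    "git.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...",
]
```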
Thanks to the Julia Security Team for disclosing this to us according to our security policy!
We also want to thank the members of the Rust project who contributed to fixing this issue. Thanks to Eric Huss and Weihang Lo for writing and reviewing the patch, Pietro Albini for coordinating the disclosure and writing this advisory, and Josh Stone, Josh Triplett and Jacob Finkelman for advising during the disclosure.
The Rust team has published a new point release of Rust, 1.66.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.66.1 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.66.1 on GitHub.
Rust 1.66.1 fixes Cargo not verifying SSH host keys when cloning dependencies or registry indexes with SSH. This security vulnerability is tracked as CVE-2022-46176, and you can find more details in the advisory.
Many people came together to create Rust 1.66.1. We couldn't have done it without all of you. Thanks!
We are pleased to announce that Android platform support in Rust will be modernized in Rust 1.68 as we update the target NDK from r17 to r25. As a consequence the minimum supported API level will increase from 15 (Ice Cream Sandwich) to 19 (KitKat).
In NDK r23 Android switched to using LLVM's libunwind for all architectures. This meant that Rust programs which previously linked against libgcc had to work around the change to instead link against libunwind. Following this update this workaround will no longer be necessary.
Going forward the Android platform will target the most recent LTS NDK, allowing Rust developers to access platform features sooner. These updates should occur yearly and will be announced in release notes.
The Rust team is happy to announce a new version of Rust, 1.66.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.66.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.66.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
Enums with integer representations can now use explicit discriminants, even when they have fields.
#[repr(u8)]
enum Foo {
A(u8),
B(i8),
C(bool) = 42,
}
Previously, you could use explicit discriminants on enums with representations, but only if none of their variants had fields. Explicit discriminants are useful when passing values across language boundaries where the representation of the enum needs to match in both languages. For example,
#[repr(u8)]
enum Bar {
A,
B,
C = 42,
}
Here the Bar enum is guaranteed to have the same layout as u8. Each variant will either use its explicitly specified discriminant value or, by default, count up from 0.
assert_eq!(0, Bar::A as u8);
assert_eq!(1, Bar::B as u8);
assert_eq!(42, Bar::C as u8);
You could even add fields to enums with #[repr(Int)], and they would be laid out in a predictable way. Previously, however, you could not use these features together. That meant that making Foo::C's discriminant equal to 42 as above would be harder to achieve: you would need to add 41 hidden variants in between as a workaround with implicit discriminants!
Starting in Rust 1.66.0, the above example compiles, allowing you to use explicit discriminants on any enum with a #[repr(Int)] attribute.
core::hint::black_box
When benchmarking or examining the machine code produced by a compiler, it's often useful to prevent optimizations from occurring in certain places. In the following example, the function push_cap executes Vec::push 4 times in a loop:
fn push_cap(v: &mut Vec<i32>) {
for i in 0..4 {
v.push(i);
}
}
pub fn bench_push() -> Duration {
let mut v = Vec::with_capacity(4);
let now = Instant::now();
push_cap(&mut v);
now.elapsed()
}
If you inspect the optimized output of the compiler on x86_64, you'll notice that it looks rather short:
example::bench_push:
sub rsp, 24
call qword ptr [rip + std::time::Instant::now@GOTPCREL]
lea rdi, [rsp + 8]
mov qword ptr [rsp + 8], rax
mov dword ptr [rsp + 16], edx
call qword ptr [rip + std::time::Instant::elapsed@GOTPCREL]
add rsp, 24
ret
In fact, the entire function push_cap we wanted to benchmark has been optimized away!
We can work around this using the newly stabilized black_box function. Functionally, black_box is not very interesting: it takes the value you pass it and passes it right back. Internally, however, the compiler treats black_box as a function that could do anything with its input and return any value (as its name implies).
This is very useful for disabling optimizations like the one we see above. For example, we can hint to the compiler that the vector will actually be used for something after every iteration of the for loop.
use std::hint::black_box;
fn push_cap(v: &mut Vec<i32>) {
for i in 0..4 {
v.push(i);
black_box(v.as_ptr());
}
}
Now we can find the unrolled for loop in our optimized assembly output:
mov dword ptr [rbx], 0
mov qword ptr [rsp + 8], rbx
mov dword ptr [rbx + 4], 1
mov qword ptr [rsp + 8], rbx
mov dword ptr [rbx + 8], 2
mov qword ptr [rsp + 8], rbx
mov dword ptr [rbx + 12], 3
mov qword ptr [rsp + 8], rbx
You can also see a side effect of calling black_box in this assembly output. The instruction mov qword ptr [rsp + 8], rbx is uselessly repeated after every iteration. This instruction writes the address v.as_ptr() as the first argument of the function, which is never actually called.
Notice that the generated code is not at all concerned with the possibility of allocations introduced by the push call. This is because the compiler is still using the fact that we called Vec::with_capacity(4) in the bench_push function. You can play around with the placement of black_box, or try using it in multiple places, to see its effects on compiler optimizations.
In Rust 1.62.0 we introduced cargo add, a command line utility to add dependencies to your project. Now you can use cargo remove to remove dependencies.
There are other changes in the Rust 1.66 release, including:
You can now use ..=X ranges in patterns.
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.66.0. We couldn't have done it without all of you. Thanks!
The 2022 State of Rust Survey is here!
It's that time again! Time for us to take a look at who the Rust community is composed of, how the Rust project is doing, and how we can improve the Rust programming experience. The Rust Survey working group is pleased to announce our 2022 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses, and establish development priorities for the future.
Completing this survey should take about 5–20 minutes and is anonymous. We will be accepting submissions for the next two weeks (until the 19th of December), and we will share our findings on blog.rust-lang.org sometime in early 2023. You can also check out last year’s results.
We're happy to be offering the survey in the following languages. If you speak multiple languages, please pick one.
Please help us spread the word by sharing the survey link on your social network feeds, at meetups, around your office, and in other communities.
If you have any questions, please see our frequently asked questions.
Finally, we wanted to thank everyone who helped develop, polish, and test the survey.
The Rust team is happy to announce a new version of Rust, 1.65.0. Rust is a programming language empowering everyone to build reliable and efficient software.
Before going into the details of the new Rust release, we'd like to draw attention to the tragic death of Mahsa Amini and the death and violent suppression of many others, by the religious morality police of Iran. See https://en.wikipedia.org/wiki/Mahsa%5FAmini%5Fprotests for more details. We stand in solidarity with the people in Iran struggling for human rights.
If you have a previous version of Rust installed via rustup, you can get 1.65.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.65.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
Lifetime, type, and const generics can now be defined on associated types, like so:
trait Foo {
type Bar<'x>;
}
It's hard to put into few words just how useful these can be, so here are a few example traits, to get a sense of their power:
/// An `Iterator`-like trait that can borrow from `Self`
trait LendingIterator {
type Item<'a> where Self: 'a;
fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}
/// Can be implemented over smart pointers, like `Rc` or `Arc`,
/// in order to allow being generic over the pointer type
trait PointerFamily {
type Pointer<T>: Deref<Target = T>;
fn new<T>(value: T) -> Self::Pointer<T>;
}
/// Allows borrowing an array of items. Useful for
/// `NdArray`-like types that don't necessarily store
/// data contiguously.
trait BorrowArray<T> {
type Array<'x, const N: usize> where Self: 'x;
fn borrow_array<'a, const N: usize>(&'a self) -> Self::Array<'a, N>;
}
As you can see, GATs are quite versatile and enable a number of patterns that could not previously be written. For more information, check out the post announcing the push for stabilization published last year or the stabilization announcement post published last week. The former goes into a bit more depth on a couple of the examples above, while the latter talks about some of the known limitations of this stabilization.
More in-depth reading can be found in the associated types section of the nightly reference or the original RFC (which was initially opened over 6.5 years ago!).
let-else statements
This introduces a new type of let statement with a refutable pattern and a diverging else block that executes when that pattern doesn't match.
let PATTERN: TYPE = EXPRESSION else {
DIVERGING_CODE;
};
Normal let statements can only use irrefutable patterns, statically known to always match. That pattern is often just a single variable binding, but may also unpack compound types like structs, tuples, and arrays. However, that was not usable for conditional matches, like pulling out a variant of an enum -- until now! With let-else, a refutable pattern can match and bind variables in the surrounding scope like a normal let, or else diverge (e.g. break, return, panic!) when the pattern doesn't match.
fn get_count_item(s: &str) -> (u64, &str) {
let mut it = s.split(' ');
let (Some(count_str), Some(item)) = (it.next(), it.next()) else {
panic!("Can't segment count item pair: '{s}'");
};
let Ok(count) = u64::from_str(count_str) else {
panic!("Can't parse integer: '{count_str}'");
};
(count, item)
}
assert_eq!(get_count_item("3 chairs"), (3, "chairs"));
The scope of name bindings is the main thing that makes this different from match or if-let-else expressions. You could previously approximate these patterns with an unfortunate bit of repetition and an outer let:
let (count_str, item) = match (it.next(), it.next()) {
(Some(count_str), Some(item)) => (count_str, item),
_ => panic!("Can't segment count item pair: '{s}'"),
};
let count = if let Ok(count) = u64::from_str(count_str) {
count
} else {
panic!("Can't parse integer: '{count_str}'");
};
break from labeled blocks
Plain block expressions can now be labeled as a break target, terminating that block early. This may sound a little like a goto statement, but it's not an arbitrary jump, only from within a block to its end. This was already possible with loop blocks, and you may have seen people write loops that always execute only once, just to get a labeled break.
Now there's a language feature specifically for that! Labeled break may also include an expression value, just as with loops, letting a multi-statement block have an early "return" value.
let result = 'block: {
do_thing();
if condition_not_met() {
break 'block 1;
}
do_next_thing();
if condition_not_met() {
break 'block 2;
}
do_last_thing();
3
};
Back in Rust 1.51, the compiler team added support for split debug information on macOS, and now this option is stable for use on Linux as well.
- -Csplit-debuginfo=unpacked will split debuginfo out into multiple .dwo DWARF object files.
- -Csplit-debuginfo=packed will produce a single .dwp DWARF package alongside your output binary with all the debuginfo packaged together.
- -Csplit-debuginfo=off is still the default behavior, which includes DWARF data in .debug_* ELF sections of the objects and final binary.
Split DWARF lets the linker avoid processing the debuginfo (because it isn't in the object files being linked anymore), which can speed up link times!
Other targets now also accept -Csplit-debuginfo as a stable option with their platform-specific default value, but specifying other values is still unstable.
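If you prefer to configure this per project rather than passing -C flags by hand, Cargo profiles expose the same knob through the split-debuginfo profile option. A minimal sketch of a Cargo.toml fragment (check the Cargo documentation for which values are stable on your platform):

```toml
# Cargo.toml -- request split debug info for release builds.
# Values mirror the rustc flag: "off", "packed", or "unpacked".
[profile.release]
split-debuginfo = "unpacked"
debug = true   # keep debuginfo in release builds so there is something to split
```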
The following methods and trait implementations are now stabilized:
Of particular note, the Backtrace API allows capturing a stack backtrace at any time, using the same platform-specific implementation that usually serves panic backtraces. This may be useful for adding runtime context to error types, for example.
These APIs are now usable in const contexts:
rust-analyzer.
There are other changes in the Rust 1.65 release, including:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.65.0. We couldn't have done it without all of you. Thanks!
As of Rust 1.65, which is set to release on November 3rd, generic associated types (GATs) will be stable — over six and a half years after the original RFC was opened. This is truly a monumental achievement; however, as with a few of the other monumental features of Rust, like async or const generics, there are limitations in the initial stabilization that we plan to remove in the future.
The goal of this post is not to teach about GATs, but rather to briefly introduce them to any readers that might not know what they are and to enumerate a few of the limitations in the initial stabilization that users are most likely to run into. More detailed information can be found in the RFC, in the GATs initiative repository, in the previous blog post during the start of the stabilization push, in the associated items section in the nightly reference, or in the open issues on GitHub for GATs.
At its core, generic associated types allow you to have generics (type, lifetime, or const) on associated types. Note that this is really just rounding out the places where you can put generics: for example, you can already have generics on freestanding type aliases and on functions in traits. Now you can just have generics on type aliases in traits (which we just call associated types). Here's an example of what a trait with a GAT would look like:
trait LendingIterator {
type Item<'a> where Self: 'a;
fn next<'a>(&'a mut self) -> Self::Item<'a>;
}
Most of this should look familiar; this trait looks very similar to the Iterator trait from the standard library. Fundamentally, this version of the trait allows the next function to return an item that borrows from self. For more detail about the example, as well as some info on what that where Self: 'a is for, check out the push for stabilization post.
In general, GATs provide a foundational basis for a vast range of patterns and APIs. If you really want to get a feel for how many projects have been blocked on GATs being stable, go scroll through the tracking issue: you will find numerous issues from other projects linking to it over the years saying something along the lines of "we want the API to look like X, but for that we need GATs" (or see this comment that has some of these put together already). If you're interested in how GATs enable a library to do zero-copy parsing, resulting in nearly a ten-fold performance increase, you might be interested in checking out a blog post on it by Niko Matsakis.
All in all, even if you won't need to use GATs directly, it's very possible that the libraries you use will use GATs either internally or publicly for ergonomics, performance, or just because that's the only way the implementation works.
As alluded to before, this stabilization is not without its bugs and limitations. This is not atypical compared to prior large language features. We plan to fix these bugs and remove these limitations as part of ongoing efforts driven by the newly-formed types team. (Stay tuned for more details in an official announcement soon!)
Here, we'll go over just a couple of the limitations that we've identified that users might run into.
Implied 'static requirement from higher-ranked trait bounds
Consider the following code:
trait LendingIterator {
type Item<'a> where Self: 'a;
}
pub struct WindowsMut<'x, T> {
slice: &'x [T],
}
impl<'x, T> LendingIterator for WindowsMut<'x, T> {
type Item<'a> = &'a mut [T] where Self: 'a;
}
fn print_items<I>(iter: I)
where
I: LendingIterator,
for<'a> I::Item<'a>: Debug,
{ ... }
fn main() {
let mut array = [0; 16];
let slice = &mut array;
let windows = WindowsMut { slice };
print_items::<WindowsMut<'_, usize>>(windows);
}
Here, imagine we wanted to have a LendingIterator where the items are overlapping slices of an array. We also have a function print_items that prints every item of a LendingIterator, as long as they implement Debug. This all seems innocent enough, but the above code doesn't compile — even though it should. Without going into details here, the for<'a> I::Item<'a>: Debug currently implies that I::Item<'a> must outlive 'static.
This is not really a nice bug, and of all the ones we'll mention today, it will likely be the one that is most limiting, annoying, and tough to figure out. It pops up much more often with GATs, but can be found with code that doesn't use GATs at all. Unfortunately, fixing this requires some refactorings to the compiler that aren't a short-term project. It is on the horizon though. The good news is that, in the meantime, we are at least working on improving the error message you get from this code. This is what it will look like in the upcoming stabilization:
error[E0597]: `array` does not live long enough
|
| let slice = &mut array;
| ^^^^^^^^^^ borrowed value does not live long enough
| let windows = WindowsMut { slice };
| print_items::<WindowsMut<'_, usize>>(windows);
| -------------------------------------------- argument requires that `array` is borrowed for `'static`
| }
| - `array` dropped here while still borrowed
|
note: due to current limitations in the borrow checker, this implies a `'static` lifetime
|
| for<'a> I::Item<'a>: Debug,
| ^^^^
It's not perfect, but it's something. It might not cover all cases, but if you have a for<'a> I::Item<'a>: Trait bound somewhere and get an error that says something doesn't live long enough, you might be running into this bug. We're actively working to fix this. However, in our experience this error doesn't actually come up as often as you might expect, so we feel the feature is still immensely useful even with it around.
So, this one is a simple one. Making traits with GATs object safe is going to take a little bit of design work for its implementation. To get an idea of the work left to do here, let's start with a bit of code that you could write on stable today:
fn takes_iter(_: &dyn Iterator) {}
Well, you can write this, but it doesn't compile:
error[E0191]: the value of the associated type `Item` (from trait `Iterator`) must be specified
--> src/lib.rs:1:23
|
1 | fn takes_iter(_: &dyn Iterator) {}
| ^^^^^^^^ help: specify the associated type: `Iterator<Item = Type>`
For a trait object to be well-formed, it must specify a value for all associated types. For the same reason, we don't want to accept the following:
fn no_associated_type(_: &dyn LendingIterator) {}
However, GATs introduce an extra bit of complexity. Take this code:
fn not_fully_generic(_: &dyn LendingIterator<Item<'static> = &'static str>) {}
So, we've specified the value of the associated type for one value of the Item's lifetime ('static), but not for any value, like this:
fn fully_generic(_: &dyn for<'a> LendingIterator<Item<'a> = &'a str>) {}
While we have a solid idea of how to implement this requirement in some future iterations of the trait solver (ones that use more logical formulations), implementing it in the current trait solver is more difficult. Thus, we've chosen to hold off on this for now.
Keeping with the LendingIterator example, let's start by looking at two methods on Iterator: for_each and filter:
trait Iterator {
type Item;
fn for_each<F>(self, f: F)
where
Self: Sized,
F: FnMut(Self::Item);
fn filter<P>(self, predicate: P) -> Filter<Self, P>
where
Self: Sized,
P: FnMut(&Self::Item) -> bool;
}
Both of these take a function as an argument; closures are often used here. Now, let's look at the LendingIterator definitions:
trait LendingIterator {
type Item<'a> where Self: 'a;
fn for_each<F>(mut self, mut f: F)
where
Self: Sized,
F: FnMut(Self::Item<'_>);
fn filter<P>(self, predicate: P) -> Filter<Self, P>
where
Self: Sized,
P: FnMut(&Self::Item<'_>) -> bool;
}
Looks simple enough, but if it really was, would it be here? Let's start by looking at what happens when we try to use for_each:
fn iterate<T, I: for<'a> LendingIterator<Item<'a> = &'a T>>(iter: I) {
iter.for_each(|_: &T| {})
}
error: `I` does not live long enough
|
| iter.for_each(|_: &T| {})
| ^^^^^^^^^^
Well, that isn't great. Turns out, this is pretty closely related to the first limitation we talked about earlier, even though the borrow checker does play a role here.
On the other hand, let's look at something that's very clearly a borrow checker problem, by looking at an implementation of the Filter struct returned by the filter method:
impl<I: LendingIterator, P> LendingIterator for Filter<I, P>
where
P: FnMut(&I::Item<'_>) -> bool, // <- the bound from above, a function
{
type Item<'a> = I::Item<'a> where Self: 'a; // <- Use the underlying type
fn next(&mut self) -> Option<I::Item<'_>> {
// Loop through each item in the underlying `LendingIterator`...
while let Some(item) = self.iter.next() {
// ...check if the predicate holds for the item...
if (self.predicate)(&item) {
// ...and return it if it does
return Some(item);
}
}
// Return `None` when we're out of items
return None;
}
}
Again, the implementation here shouldn't seem surprising. We, of course, run into a borrow checker error:
error[E0499]: cannot borrow `self.iter` as mutable more than once at a time
--> src/main.rs:28:32
|
27 | fn next(&mut self) -> Option<I::Item<'_>> {
| - let's call the lifetime of this reference `'1`
28 | while let Some(item) = self.iter.next() {
| ^^^^^^^^^^^^^^^^ `self.iter` was mutably borrowed here in the previous iteration of the loop
29 | if (self.predicate)(&item) {
30 | return Some(item);
| ---------- returning this value requires that `self.iter` is borrowed for `'1`
This is a known limitation in the current borrow checker and should be solved in some future iteration (like Polonius).
The last limitation we'll talk about today is a bit different from the others; it's not a bug and it shouldn't prevent any programs from compiling. But it all comes back to that where Self: 'a clause you've seen in several parts of this post. As mentioned before, if you're interested in digging a bit into why that clause is required, see the push for stabilization post.
There is one not-so-ideal requirement about this clause: you must write it on the trait. Like with where clauses on functions, you cannot add clauses to associated types in impls that aren't there in the trait. However, if you didn't add this clause, a large set of potential impls of the trait would be disallowed.
To help users not fall into the pitfall of accidentally forgetting to add this (or similar clauses that end up with the same effect for a different set of generics), we've implemented a set of rules that must be followed for a trait with GATs to compile. Let's first look at the error without writing the clause:
trait LendingIterator {
type Item<'a>;
fn next<'a>(&'a mut self) -> Self::Item<'a>;
}
error: missing required bound on `Item`
--> src/lib.rs:2:5
|
2 | type Item<'a>;
| ^^^^^^^^^^^^^-
| |
| help: add the required where clause: `where Self: 'a`
|
= note: this bound is currently required to ensure that impls have maximum flexibility
= note: we are soliciting feedback, see issue #87479 <https://github.com/rust-lang/rust/issues/87479> for more information
This error should hopefully be helpful (you can even cargo fix it!). But, what exactly are these rules? Well, ultimately, they end up being somewhat simple: for methods that use the GAT, any bounds that can be proven must also be present on the GAT itself.
Okay, so how did we end up with the required Self: 'a bound? Well, let's take a look at the next method. It returns Self::Item<'a>, and we have an argument &'a mut self. We're getting a bit into the details of the Rust language, but because of that argument, we know that Self: 'a must hold. So, we require that bound.
We're requiring these bounds now to leave room in the future to potentially imply these automatically (and of course because it should help users write traits with GATs). They shouldn't interfere with any real use-cases, but if you do encounter a problem, check out the issue mentioned in the error above. And if you want to see a fairly comprehensive testing of different scenarios on what bounds are required and when, check out the relevant test file.
Hopefully the limitations brought up here and the explanations thereof don't detract from the overall excitement of GATs' stabilization. Sure, these limitations do, well, limit the number of things you can do with GATs. However, we would not be stabilizing GATs if we didn't feel that GATs are still very useful. Additionally, we wouldn't be stabilizing GATs if we didn't feel that the limitations were solvable (and in a backwards-compatible manner).
To conclude things, all the various people involved in getting this stabilization to happen deserve the utmost thanks. As said before, it's been 6.5 years coming and it couldn't have happened without everyone's support and dedication. Thanks all!
The Rust team is happy to announce a new version of Rust, 1.64.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.64.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.64.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
.await with IntoFuture
Rust 1.64 stabilizes the IntoFuture trait. IntoFuture is a trait similar to IntoIterator, but rather than supporting for ... in ... loops, IntoFuture changes how .await works. With IntoFuture, the .await keyword can await more than just futures; it can await anything which can be converted into a Future via IntoFuture - which can help make your APIs more user-friendly!
Take for example a builder which constructs requests to some storage provider over the network:
pub struct Error { ... }
pub struct StorageResponse { ... }
pub struct StorageRequest(bool);
impl StorageRequest {
/// Create a new instance of `StorageRequest`.
pub fn new() -> Self { ... }
/// Decide whether debug mode should be enabled.
pub fn set_debug(self, b: bool) -> Self { ... }
/// Send the request and receive a response.
pub async fn send(self) -> Result<StorageResponse, Error> { ... }
}
Typical usage would likely look something like this:
let response = StorageRequest::new() // 1. create a new instance
.set_debug(true) // 2. set some option
.send() // 3. construct the future
.await?; // 4. run the future + propagate errors
This is not bad, but we can do better here. Using IntoFuture we can combine "construct the future" (line 3) and "run the future" (line 4) into a single step:
let response = StorageRequest::new() // 1. create a new instance
.set_debug(true) // 2. set some option
.await?; // 3. construct + run the future + propagate errors
We can do this by implementing IntoFuture for StorageRequest. IntoFuture requires us to have a named future we can return, which we can do by creating a "boxed future" and defining a type alias for it:
// First we must import some new types into the scope.
use std::pin::Pin;
use std::future::{Future, IntoFuture};
pub struct Error { ... }
pub struct StorageResponse { ... }
pub struct StorageRequest(bool);
impl StorageRequest {
/// Create a new instance of `StorageRequest`.
pub fn new() -> Self { ... }
/// Decide whether debug mode should be enabled.
pub fn set_debug(self, b: bool) -> Self { ... }
/// Send the request and receive a response.
pub async fn send(self) -> Result<StorageResponse, Error> { ... }
}
// The new implementations:
// 1. create a new named future type
// 2. implement `IntoFuture` for `StorageRequest`
pub type StorageRequestFuture = Pin<Box<dyn Future<Output = Result<StorageResponse, Error>> + Send + 'static>>;
impl IntoFuture for StorageRequest {
type IntoFuture = StorageRequestFuture;
type Output = <StorageRequestFuture as Future>::Output;
fn into_future(self) -> Self::IntoFuture {
Box::pin(self.send())
}
}
This takes a bit more code to implement, but provides a simpler API for users.
In the future, the Rust Async WG hopes to simplify creating new named futures by supporting impl Trait in type aliases (Type Alias Impl Trait or TAIT). This should make implementing IntoFuture easier by simplifying the type alias' signature, and make it more performant by removing the Box from the type alias.
When calling or being called by C ABIs, Rust code can use type aliases like c_uint or c_ulong to match the corresponding types from C on any target, without requiring target-specific code or conditionals.
Previously, these type aliases were only available in std, so code written for embedded targets and other scenarios that could only use core or alloc could not use these types.
Rust 1.64 now provides all of the c_* type aliases in core::ffi, as well as core::ffi::CStr for working with C strings. Rust 1.64 also provides alloc::ffi::CString for working with owned C strings using only the alloc crate, rather than the full std library.
rust-analyzer is now included as part of the collection of tools included with Rust. This makes it easier to download and access rust-analyzer, and makes it available on more platforms. It is available as a rustup component which can be installed with:
rustup component add rust-analyzer
At this time, to run the rustup-installed version, you need to invoke it this way:
rustup run rust-analyzer
The next release of rustup will provide a built-in proxy so that running the executable rust-analyzer will launch the appropriate version.
Most users should continue to use the releases provided by the rust-analyzer team (available on the rust-analyzer releases page), which are published more frequently. Users of the official VSCode extension are not affected since it automatically downloads and updates releases in the background.
When working with collections of related libraries or binary crates in one Cargo workspace, you can now avoid duplication of common field values between crates, such as common version numbers, repository URLs, or rust-version. This also helps keep these values in sync between crates when updating them. For more details, see workspace.package, workspace.dependencies, and "inheriting a dependency from a workspace".
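As a sketch of what this looks like in practice (the member and dependency names here are hypothetical), a workspace root declares the shared values once and each member opts in:

```toml
# Workspace root Cargo.toml (a sketch; member names are hypothetical)
[workspace]
members = ["crate-a", "crate-b"]

[workspace.package]
version = "1.2.3"
edition = "2021"

[workspace.dependencies]
serde = "1.0"

# Then in crate-a/Cargo.toml each member opts in:
#
# [package]
# name = "crate-a"
# version.workspace = true
# edition.workspace = true
#
# [dependencies]
# serde.workspace = true
```

Bumping the version or the shared dependency in one place now updates every member that inherits it.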
When building for multiple targets, you can now pass multiple --target
options to cargo build
, to build all of those targets at once. You can also setbuild.targetto an array of multiple targets in .cargo/config.toml
to build for multiple targets by default.
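A minimal sketch of such a configuration (the target names here are just examples):

```toml
# .cargo/config.toml — build these targets by default when no
# --target flag is passed (example target names)
[build]
target = ["x86_64-unknown-linux-gnu", "wasm32-unknown-unknown"]
```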
The following methods and trait implementations are now stabilized:
These types were previously stable in std::ffi
, but are now also available incore
and alloc
:
These types were previously stable in std::os::raw
, but are now also available in core::ffi
and std::ffi
:
We've stabilized some helpers for use with Poll
, the low-level implementation underneath futures:
In the future, we hope to provide simpler APIs that require less use of low-level details like Poll
and Pin
, but in the meantime, these helpers make it easier to write such code.
These APIs are now usable in const contexts:
There are a few compatibility notes in this release:

- linux targets now require at least Linux kernel 3.2 (except for targets which already required a newer kernel), and linux-gnu targets now require glibc 2.17 (except for targets which already required a newer glibc).
- Rust 1.64 changes the memory layout of Ipv4Addr, Ipv6Addr, SocketAddrV4 and SocketAddrV6 to be more compact and memory efficient. This internal representation was never exposed, but some crates relied on it anyway by using std::mem::transmute, resulting in invalid memory accesses. Such internal implementation details of the standard library are never considered a stable interface. To limit the damage, we worked with the authors of all of the still-maintained crates doing so to release fixed versions, which have been out for more than a year. The vast majority of impacted users should be able to mitigate with a cargo update.

There are other changes in the Rust 1.64 release, including:

- You can enable the unused_tuple_struct_fields lint to get warnings about unused fields in a tuple struct. In future versions, we plan to make this lint warn by default. Fields of type unit (()) do not produce this warning, to make it easier to migrate existing code without having to change tuple indices.

Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.64.0. We couldn't have done it without all of you.Thanks!
In a recent Rust issue (#99923), a developer noted that the upcoming 1.64-beta version of Rust had started signalling errors on their crate, icu4x. The icu4x crate uses unsafe code during const evaluation. Const evaluation, or just "const-eval", runs at compile-time but produces values that may end up embedded in the final object code that executes at runtime.
Rust's const-eval system supports both safe and unsafe Rust, but the rules for what unsafe code is allowed to do during const-eval are even more strict than what is allowed for unsafe code at runtime. This post is going to go into detail about one of those rules.
(Note: If your const
code does not use any unsafe
blocks or call any const fn
with an unsafe
block, then you do not need to worry about this!)
The problem, reduced over the course of the comment thread of #99923, is that certain static initialization expressions (see below) are defined as having undefined behavior (UB) at compile time (playground):
pub static FOO: () = unsafe {
let illegal_ptr2int: usize = std::mem::transmute(&());
let _copy = illegal_ptr2int;
};
(Many thanks to @eddyb
for the minimal reproduction!)
The code above was accepted by Rust versions 1.63 and earlier, but in the Rust 1.64-beta, it now causes a compile time error with the following message:
error[E0080]: could not evaluate static initializer
--> demo.rs:3:17
|
3 | let _copy = illegal_ptr2int;
| ^^^^^^^^^^^^^^^ unable to turn pointer into raw bytes
|
= help: this code performed an operation that depends on the underlying bytes representing a pointer
= help: the absolute address of a pointer is not known at compile-time, so such operations are not supported
As the message says, this operation is not supported: the transmute
above is trying to reinterpret the memory address &()
as an integer of type usize
. The compiler cannot predict what memory address the ()
would be associated with at execution time, so it refuses to allow that reinterpretation.
When you write safe Rust, then the compiler is responsible for preventing undefined behavior. When you write any unsafe code (be it const or non-const), you are responsible for preventing UB, and during const-eval, the rules about what unsafe code has defined behavior are even more strict than the analogous rules governing Rust's runtime semantics. (In other words, more code is classified as "UB" than you may have otherwise realized.)
If you hit undefined behavior during const-eval, the Rust compiler will protect itself from adverse effects such as the undefined behavior leaking into the type system, but there are few guarantees other than that. For example, compile-time UB could lead to runtime UB. Furthermore, if you have UB at const-eval time, there is no guarantee that your code will be accepted from one compiler version to another.
You might be thinking: "it used to be accepted; therefore, there must be some value for the memory address that the previous version of the compiler was using here."
But such reasoning would be based on an imprecise view of what the Rust compiler was doing here.
The const-eval machinery of the Rust compiler is built upon the MIR-interpreterMiri, which uses an abstract model of a hypothetical machine as the foundation for evaluating such expressions. This abstract model doesn't have to represent memory addresses as mere integers; in fact, to support Miri's fine-grained checking for UB, it uses a much richer datatype for the values that are held in the abstract memory store.
The details of Miri's value representation do not matter too much for our discussion here. We merely note that earlier versions of the compiler silently accepted expressions that seemed to transmute memory addresses into integers, copied them around, and then transmuted them back into addresses; but that was not what was actually happening under the hood. Instead, what was happening was that the Miri values were passed around blindly (after all, the whole point of transmute is that it does no transformation on its input value, so it is a no-op in terms of its operational semantics).
The fact that it was passing a memory address into a context where you would expect there to always be an integer value would only be caught, if at all, at some later point.
For example, the const-eval machinery rejects code that attempts to embed the transmuted pointer into a value that could be used by runtime code, like so (playground):
pub static FOO: usize = unsafe {
let illegal_ptr2int: usize = std::mem::transmute(&());
illegal_ptr2int
};
Likewise, it rejects code that attempts to perform arithmetic on that non-integer value, like so (playground):
pub static FOO: () = unsafe {
let illegal_ptr2int: usize = std::mem::transmute(&());
let _incremented = illegal_ptr2int + 1;
};
Both of the latter two variants are rejected in stable Rust, and have been for as long as Rust has accepted pointer-to-integer conversions in static initializers (see e.g. Rust 1.52).
In fact, all of the examples provided above are exhibiting undefined behavior according to the semantics of Rust's const-eval system.
The first example with _copy
was accepted in Rust versions 1.46 through 1.63 because of Miri implementation artifacts. Miri puts considerable effort into detecting UB, but does not catch all instances of it. Furthermore, by default, Miri's detection can be delayed to a point far after where the actual problematic expression is found.
But with nightly Rust, we can opt into extra checks for UB that Miri provides, by passing the unstable flag -Z extra-const-ub-checks
. If we do that, then for all of the above examples we get the same result:
error[E0080]: could not evaluate static initializer
--> demo.rs:2:34
|
2 | let illegal_ptr2int: usize = std::mem::transmute(&());
| ^^^^^^^^^^^^^^^^^^^^^^^^ unable to turn pointer into raw bytes
|
= help: this code performed an operation that depends on the underlying bytes representing a pointer
= help: the absolute address of a pointer is not known at compile-time, so such operations are not supported
The earlier examples had diagnostic output that put the blame in a misleading place. With the more precise checking -Z extra-const-ub-checks
enabled, the compiler highlights the expression where we can first witness UB: the original transmute itself! (Which was stated at the outset of this post; here we are just pointing out that these tools can pinpoint the injection point more precisely.)
Why not have these extra const-ub checks on by default? Well, the checks introduce performance overhead upon Rust compilation time, and we do not know if that overhead can be made acceptable. (However, recent debate among Miri developers indicates that the inherent cost here might not be as bad as they had originally thought. Perhaps a future version of the compiler will have these extra checks on by default.)
You might well be wondering at this point: "Wait, when is it okay to transmute a pointer to a usize
during const evaluation?" And the answer is simple: "Never."
Transmuting a pointer to a usize during const-eval has always been undefined behavior, ever since const-eval added support for transmute
and union
. You can read more about this in the const_fn_transmute
/ const_fn_union
stabilization report, specifically the subsection entitled "Pointer-integer-transmutes". (It is also mentioned in the documentation for transmute
.)
Thus, we can see that the classification of the above examples as UB during const evaluation is not a new thing at all. The only change here was that Miri had some internal changes that made it start detecting the UB rather than silently ignoring it.
This means the Rust compiler has a shifting notion of what UB it will explicitly catch. We anticipated this: RFC 3016, "const UB", explicitly says:
[...] there is no guarantee that UB is reliably detected during CTFE. This can change from compiler version to compiler version: CTFE code that causes UB could build fine with one compiler and fail to build with another. (This is in accordance with the general policy that unsound code is not subject to stability guarantees.)
Having said that: So much of Rust's success has been built around the trust that we have earned with our community. Yes, the project has always reserved the right to make breaking changes when resolving soundness bugs; but we have also strived to mitigate such breakage whenever feasible, via things like future-incompatible lints.
Today, with our current const-eval architecture layered atop Miri, it is not feasible to ensure that changes such as the one that injected issue #99923 go through a future-incompat warning cycle. The compiler team plans to keep our eye on issues in this space. If we see evidence that these kinds of changes do cause breakage to a non-trivial number of crates, then we will investigate further how we might smooth the transition path between compiler releases. However, we need to balance any such goal against the fact that Miri has a very limited set of developers: the researchers determining how to define the semantics of unsafe languages like Rust. We do not want to slow their work down!
If you observe the could not evaluate static initializer
message on your crate atop Rust 1.64, and it was compiling with previous versions of Rust, we want you to let us know: file an issue!
We have performed a crater run for the 1.64-beta and that did not find any other instances of this particular problem. If you can test compiling your crate atop the 1.64-beta before the stable release goes out on September 22nd, all the better! One easy way to try the beta is to use rustup's override shorthand for it:
$ rustup update beta
$ cargo +beta build
As Rust's const-eval evolves, we may see another case like this arise again. If you want to defend against future instances of const-eval UB, we recommend that you set up a continuous integration service to invoke the nightly rustc
with the unstable -Z extra-const-ub-checks
flag on your code.
As you might imagine, a lot of us are pretty interested in questions such as "what should be undefined behavior?"
See for example Ralf Jung's excellent blog series on why pointers are complicated (parts I, II, III), which contain some of the details elided above about Miri's representation, and spell out reasons why you might want to be concerned about pointer-to-usize transmutes even outside of const-eval.
If you are interested in trying to help us figure out answers to those kinds of questions, please join us in the unsafe code guidelines zulip.
If you are interested in learning more about Miri, or contributing to it, you can say Hello in the miri zulip.
To sum it all up: When you write safe Rust, then the compiler is responsible for preventing undefined behavior. When you write any unsafe code, you are responsible for preventing undefined behavior. Rust's const-eval system has a stricter set of rules governing what unsafe code has defined behavior: specifically, reinterpreting (aka "transmuting") a pointer value as a usize
is undefined behavior during const-eval. If you have undefined behavior at const-eval time, there is no guarantee that your code will be accepted from one compiler version to another.
The compiler team is hoping that issue #99923 is an exceptional fluke and that the 1.64 stable release will not encounter any other surprises related to the aforementioned change to the const-eval machinery.
But fluke or not, the issue provided excellent motivation to spend some time exploring facets of Rust's const-eval architecture, and the Miri interpreter that underlies it. We hope you enjoyed reading this as much as we did writing it.
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG was notified that Cargo did not prevent extracting some malformed packages downloaded from alternate registries. An attacker able to upload packages to an alternate registry could fill the filesystem or corrupt arbitrary files when Cargo downloaded the package.
These issues have been assigned CVE-2022-36113 and CVE-2022-36114. The severity of these vulnerabilities is "low" for users of alternate registries. Users relying on crates.io are not affected.
Note that by design Cargo allows code execution at build time, due to build scripts and procedural macros. The vulnerabilities in this advisory allow performing a subset of the possible damage in a harder to track down way. Your dependencies must still be trusted if you want to be protected from attacks, as it's possible to perform the same attacks with build scripts and procedural macros.
After a package is downloaded, Cargo extracts its source code in the ~/.cargo
folder on disk, making it available to the Rust projects it builds. To record when an extraction is successful, Cargo writes "ok" to the .cargo-ok
file at the root of the extracted source code once it has extracted all the files.
It was discovered that Cargo allowed packages to contain a .cargo-ok
symbolic link, which Cargo would extract. Then, when Cargo attempted to write "ok" into .cargo-ok
, it would actually replace the first two bytes of the file the symlink pointed to with ok
. This would allow an attacker to corrupt one file on the machine using Cargo to extract the package.
It was discovered that Cargo did not limit the amount of data extracted from compressed archives. An attacker could upload to an alternate registry a specially crafted package that extracts way more data than its size (also known as a "zip bomb"), exhausting the disk space on the machine using Cargo to download the package.
Both vulnerabilities are present in all versions of Cargo. Rust 1.64, to be released on September 22nd, will include fixes for both of them.
Since these vulnerabilities are just a more limited way to accomplish what a malicious build scripts or procedural macros can do, we decided not to publish Rust point releases backporting the security fix. Patch files for Rust 1.63.0 are available in the wg-security-response repository for people building their own toolchains.
We recommend users of alternate registries to exercise care in which packages they download, by only including trusted dependencies in their projects. Please note that even with these vulnerabilities fixed, by design Cargo allows arbitrary code execution at build time thanks to build scripts and procedural macros: a malicious dependency will be able to cause damage regardless of these vulnerabilities.
crates.io implemented server-side checks to reject these kinds of packages years ago, and there are no packages on crates.io exploiting these vulnerabilities. crates.io users still need to exercise care in choosing their dependencies though, as the same concerns about build scripts and procedural macros apply here.
We want to thank Ori Hollander from JFrog Security Research for responsibly disclosing this to us according to the Rust security policy.
We also want to thank Josh Triplett for developing the fixes, Weihang Lo for developing the tests, and Pietro Albini for writing this advisory. The disclosure was coordinated by Pietro Albini and Josh Stone.
The Rust team is happy to announce a new version of Rust, 1.63.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.63.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.63.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta
) or the nightly channel (rustup default nightly
). Please report any bugs you might come across!
Rust code could launch new threads with std::thread::spawn
since 1.0, but this function bounds its closure with 'static
. Roughly, this means that threads currently must have ownership of any arguments passed into their closure; you can't pass borrowed data into a thread. In cases where the threads are expected to exit by the end of the function (by being join()
'd), this isn't strictly necessary and can require workarounds like placing the data in an Arc.
Now, with 1.63.0, the standard library is adding scoped threads, which allow spawning a thread borrowing from the local stack frame. The std::thread::scope API provides the necessary guarantee that any spawned threads will have exited prior to itself returning, which allows for safely borrowing data. Here's an example:
let mut a = vec![1, 2, 3];
let mut x = 0;
std::thread::scope(|s| {
s.spawn(|| {
println!("hello from the first scoped thread");
// We can borrow `a` here.
dbg!(&a);
});
s.spawn(|| {
println!("hello from the second scoped thread");
// We can even mutably borrow `x` here,
// because no other threads are using it.
x += a[0] + a[2];
});
println!("hello from the main thread");
});
// After the scope, we can modify and access our variables again:
a.push(4);
assert_eq!(x, a.len());
Previously, Rust code working with platform APIs taking raw file descriptors (on unix-style platforms) or handles (on Windows) would typically work directly with a platform-specific representation of the descriptor (for example, a c_int
, or the alias RawFd
). For Rust bindings to such native APIs, the type system then failed to encode whether the API would take ownership of the file descriptor (e.g., close
) or merely borrow it (e.g., dup
).
Now, Rust provides wrapper types such as BorrowedFd and OwnedFd, which are marked as #[repr(transparent)], meaning that extern "C" bindings can directly take these types to encode the ownership semantics. See the stabilized APIs section for the full list of wrapper types stabilized in 1.63; currently, they are available on cfg(unix) platforms, Windows, and WASI.
We recommend that new APIs use these types instead of the previous type aliases (like RawFd).
const Mutex, RwLock, Condvar initialization

The Condvar::new, Mutex::new, and RwLock::new functions are now callable in const contexts, which allows avoiding the use of crates like lazy_static
for creating global statics with Mutex
, RwLock
, or Condvar
values. This builds on the work in 1.62 to enable thinner and faster mutexes on Linux.
impl Trait
For a function signature like fn foo<T>(value: T, f: impl Copy)
, it was an error to specify the concrete type of T
via turbofish: foo::<u32>(3, 3)
would fail with:
error[E0632]: cannot provide explicit generic arguments when `impl Trait` is used in argument position
--> src/lib.rs:4:11
|
4 | foo::<u32>(3, 3);
| ^^^ explicit generic argument not allowed
|
= note: see issue #83701 <https://github.com/rust-lang/rust/issues/83701> for more information
In 1.63, this restriction is relaxed, and the explicit type of the generic can be specified. However, the impl Trait
parameter, despite desugaring to a generic, remains opaque and cannot be specified via turbofish.
As detailed in this blog post, we've fully removed the previous lexical borrow checker from rustc across all editions, fully enabling the new, non-lexical version of the borrow checker. Since the borrow checker doesn't affect the output of rustc, this won't change the behavior of any programs, but it completes a long-running migration (started in the initial stabilization of NLL for the 2018 edition) to deliver the full benefits of the new borrow checker across all editions of Rust. For most users, this change will bring slightly better diagnostics for some borrow checking errors, but will not otherwise impact which code they can write.
You can read more about non-lexical lifetimes in this section of the 2018 edition announcement.
The following methods and trait implementations are now stabilized:
These APIs are now usable in const contexts:
There are other changes in the Rust 1.63.0 release. Check out what changed inRust,Cargo, and Clippy.
Many people came together to create Rust 1.63.0. We couldn't have done it without all of you.Thanks!
As of Rust 1.63 (releasing next week), the "non-lexical lifetimes" (NLL) work will be enabled by default. NLL is the second iteration of Rust's borrow checker. The RFC actually does quite a nice job of highlighting some of the motivating examples. "But," I hear you saying, "wasn't NLL included in Rust 2018?" And yes, yes it was! But at that time, NLL was only enabled for Rust 2018 code, while Rust 2015 code ran in "migration mode". When in "migration mode," the compiler would run both the old and the new borrow checker and compare the results. This way, we could give warnings for older code that should never have compiled in the first place; we could also limit the impact of any bugs in the new code. Over time, we have limited migration mode to be closer and closer to just running the new-style borrow checker: in the next release, that process completes, and all Rust code will be checked with NLL.
At this point, we have almost completely merged "migration mode" and "regular mode", so switching to NLL will have very little impact on the user experience. A number of diagnostics changed, mostly for the better -- Jack Huey gives the full details in his blog post.
The work to remove the old borrow checker has been going on for years. It's been a long, tedious, and largely thankless process. We'd like to take a moment to highlight the various people involved and make sure they are recognized for their hard work:
Jack's blog post includes a detailed narrative of all the work involved if you'd like more details! It's a fun read.
The next frontier for Rust borrow checking is taking the polonius project and moving it from research experiment to production code. Polonius is a next-generation version of the borrow checker that was "spun off" from the main NLL effort in 2018, as we were getting NLL ready to ship in production. Its most important contribution is fixing a known limitation of the borrow checker, demonstrated by the following example:
fn last_or_push<'a>(vec: &'a mut Vec<String>) -> &'a String {
if let Some(s) = vec.last() { // borrows vec
// returning s here forces vec to be borrowed
// for the rest of the function, even though it
// shouldn't have to be
return s;
}
// Because vec is borrowed, this call to vec.push gives
// an error!
vec.push("".to_string()); // ERROR
vec.last().unwrap()
}
This example doesn't compile today (try it for yourself), though there's not a good reason for that. You can often work around the problem by editing the code to introduce a redundant if (as shown in this example), but with polonius, it will compile as is. If you'd like to learn more about how polonius (and the existing borrow checker) works, you can watch my talk from Rust Belt Rust.
The minimum requirements for Rust toolchains targeting Linux will increase with the Rust 1.64.0 release (slated for September 22nd, 2022). The new minimum requirements are:
These requirements apply both to running the Rust compiler itself (and other Rust tooling like Cargo or Rustup), and to running binaries produced by Rust, if they use libstd.
If you are not targeting an old long-term-support distribution, or embedded hardware running an old Linux version, this change is unlikely to affect you. Otherwise, read on!
In principle, the new kernel requirements affect all *-linux-*
targets, while the glibc requirements affect all *-linux-gnu*
targets. In practice, many targets were already requiring newer kernel or glibc versions. The requirements for such targets do not change.
Among targets for which a Rust host toolchain is distributed, the following are affected:
- i686-unknown-linux-gnu (Tier 1)
- x86_64-unknown-linux-gnu (Tier 1)
- x86_64-unknown-linux-musl (Tier 2 with host tools)
- powerpc-unknown-linux-gnu (Tier 2 with host tools)
- powerpc64-unknown-linux-gnu (Tier 2 with host tools)
- s390x-unknown-linux-gnu (Tier 2 with host tools)

The following are not affected, because they already had higher glibc/kernel requirements:
- aarch64-unknown-linux-gnu (Tier 1)
- aarch64-unknown-linux-musl (Tier 2 with host tools)
- arm-unknown-linux-gnueabi (Tier 2 with host tools)
- arm-unknown-linux-gnueabihf (Tier 2 with host tools)
- armv7-unknown-linux-gnueabihf (Tier 2 with host tools)
- mips-unknown-linux-gnu (Tier 2 with host tools)
- powerpc64le-unknown-linux-gnu (Tier 2 with host tools)
- riscv64gc-unknown-linux-gnu (Tier 2 with host tools)

For other tier 2 or tier 3 targets, for which no Rust toolchain is distributed, we do not accurately track minimum requirements, and they may or may not be affected by this change. *-linux-musl* targets are only affected by the kernel requirements, not the glibc requirements. Targets which only use libcore and not libstd are unaffected.
A list of supported targets and their requirements can be found on theplatform support page. However, the page is not yet up to date with the changes announced here.
The glibc and kernel versions used for the new baseline requirements are already close to a decade old. As such, this change should only affect users that either target old long-term-support Linux distributions, or embedded hardware running old versions of Linux.
The following Linux distributions are still supported under the new requirements:
The following distributions are not supported under the new requirements:
Out of the distributions in the second list, only RHEL 6 still has limited vendor support (ELS).
We want Rust, and binaries produced by Rust, to be as widely usable as possible. At the same time, the Rust project only has limited resources to maintain compatibility with old environments.
There are two parts to the toolchain requirements: The minimum requirements for running the Rust compiler on a host system, and the minimum requirements for cross-compiled binaries.
The minimum requirements for host toolchains affect our build system. Rust CI produces binary artifacts for dozens of different targets. Creating binaries that support old glibc versions requires either building on an operating system with old glibc (for native builds) or using a buildroot with an old glibc version (for cross-compiled builds).
At the same time, Rust relies on LLVM for optimization and code generation, which regularly increases its toolchain requirements. LLVM 16 will require GCC 7.1 or newer (and LLVM 15 supports GCC 5.1 in name only). Creating a build environment that has both a very old glibc and a recent compiler becomes increasingly hard over time. crosstool-ng (which we use for most cross-compilation needs) does not support both targeting glibc 2.11 and using a compiler that satisfies the new LLVM requirements.
The requirements for cross-compiled binaries have a different motivation: They affect which kernel versions need to be supported by libstd. Increasing the kernel requirements allows libstd to use newer syscalls, without having to maintain and test compatibility with kernels that do not support them.
The new baseline requirements were picked as the least common denominator among long-term-support distributions that still have active support. This is currently RHEL 7 with glibc 2.17 and kernel 3.10. The kernel requirement is picked as 3.2 instead, because this is the minimum requirement of glibc itself, and there is little relevant API difference between these versions.
If you or your organization are affected by this change, there are a number of viable options depending on your situation:
The Rust team has published a new point release of Rust, 1.62.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.62.1 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.62.1 on GitHub.
Rust 1.62.1 addresses a few recent regressions in the compiler and standard library, and also mitigates a CPU vulnerability on Intel SGX.
Many people came together to create Rust 1.62.1. We couldn't have done it without all of you. Thanks!
We want to say farewell and thanks to a couple of people who are stepping back from the Core Team:
Many thanks to both of them for their contributions and we look forward to seeing their future efforts with Rust!
The rustup working group is announcing the release of rustup version 1.25.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.25.1 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:
rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
This version of rustup fixes a regression introduced in the previous release (1.25.0), which caused some workflows to fail.
When you invoke Rust or Cargo installed by rustup, you're not running them directly. Instead, you run rustup "proxy" binaries, whose job is to detect the right toolchain (parsing the +channel
CLI argument or using one of the defaults) and run it.
Running these proxies is not instantaneous though, and for example a cargo build
invocation might execute several of them (the initial cargo
invocation plus one rustc
for every dependency), slowing down the build.
To improve performance, rustup 1.25.0 changed the proxies code to set the RUSTC and RUSTDOC environment variables when missing, which instructed Cargo to skip the proxies and invoke the binaries defined in those variables directly. This provided a performance gain when building crates with lots of dependencies.
Unfortunately this change broke some users of rustup, who did something like:
1. Configured a toolchain, for example foo, setting the RUSTC and RUSTDOC environment variables pointing to that toolchain.
2. Invoked Cargo with a different toolchain, for example bar (for example cargo +bar build). This does not set the RUSTC and RUSTDOC environment variables pointing to bar, as those variables are already present.
3. Cargo invoked the binary defined in the RUSTC environment variable, skipping the proxy, which results in the foo toolchain being invoked. Previous versions of rustup invoked the proxy instead, which would correctly detect and use the bar toolchain.

Rustup 1.25.1 fixes this regression by reverting the change. The rustup working group is discussing in issue #3035 plans to re-introduce the change in a future release while avoiding breakage.
Thanks again to all the contributors who made rustup 1.25.1 possible!
The rustup working group is happy to announce the release of rustup version 1.25.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.25.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:
rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
This version of Rustup involves a significant number of internal cleanups, both in terms of the Rustup code and its documentation. In addition to a lot of work on the codebase itself, due to the length of time since the last release this one has a record number of contributors and we thank you all for your efforts and time.
One of the biggest changes in 1.25.0 is the new offer on Windows installs to auto-install the Visual Studio 2022 compilers which should simplify the process of getting started for people not used to developing on Windows with the MSVC-compatible toolchains.
A second important change for 1.25.0 is a number of PRs focused on startup performance for Rustup. While it may not seem all that important to many, Rustup's startup time is a factor in the time it takes to do builds involving large numbers of crates on systems which do not have large numbers of CPU cores. Hopefully the people for whom this is a common activity will notice an improvement, though there are still more opportunities to speed things up.
Some, but by no means all, of the rest of this release's highlights include support for rustup default none to unset the default toolchain, support for Windows arm64, inclusion of rust-gdbgui as a proxy so that platforms which support it can use GDB's GUI mode with Rust, and some improvements to rustup-init.sh.
Full details are available in the changelog!
Rustup's documentation is also available in the rustup book.
Thanks again to all the contributors who made rustup 1.25.0 possible!
The Rust Language Server (RLS) is being deprecated in favor of rust-analyzer. Current users of RLS should migrate to using rust-analyzer instead. Builds of RLS will continue to be released until at least the Rust 1.64 release (2022-09-22), after which no new releases will be made. This timeline may change if any issues arise.
RLS is an implementation of the Language Server Protocol (LSP) which provides enhanced features with any editor that supports the protocol, such as code-checking and refactoring. RLS was introduced by RFC 1317 and development was very active from 2016 through 2019. However, the architecture of RLS has several limitations that can make it difficult to provide low-latency and high-quality responses needed for an interactive environment.
Development of rust-analyzer began near the beginning of 2018 to provide an alternate LSP implementation for Rust. rust-analyzer uses a fundamentally different approach that does not rely on using rustc. In RFC 2912 rust-analyzer was adopted as the official replacement for RLS.
How you migrate to rust-analyzer will depend on which editor you are using. If you are using VSCode, you should uninstall the rust-lang.rust
extension and install the official rust-lang.rust-analyzer extension. For other editors, please consult the rust-analyzer manual for instructions on how to install it.
Should you have any issues migrating to rust-analyzer, the Editors and IDEs category on the Rust Users forum is available for help with installation and usage.
We will soon be marking the official rust-lang.rust
VSCode extension as deprecated, and will be implementing notifications that will inform users about the transition. After the end of release builds of RLS, we plan to replace the rls
executable in official Rust releases with a small LSP implementation that informs the user that RLS is no longer available.
We would like to thank everyone who has worked on RLS and rust-analyzer. These options would not exist without the tremendous effort of all the contributors to these projects.
The Rust team is happy to announce a new version of Rust, 1.62.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.62.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.62.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta
) or the nightly channel (rustup default nightly
). Please report any bugs you might come across!
cargo add
You can now add new dependencies directly from the command line using cargo add
. This command supports specifying features and versions. It can also be used to modify existing dependencies.
For example:
cargo add log
cargo add serde --features derive
cargo add nom@5
See the cargo documentation for more.
#[default] enum variants

You can now use #[derive(Default)] on enums if you specify a default variant. For example, instead of manually writing a Default impl for this enum, you can now write:
#[derive(Default)]
enum Maybe<T> {
#[default]
Nothing,
Something(T),
}
As of now only "unit" variants (variants that have no fields) are allowed to be marked #[default]
. More information is available in the RFC for this feature.
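The derived implementation returns the variant marked with #[default]. A small sketch checking that behavior (the Debug and PartialEq derives are added here just so the assertion can be written):

```rust
#[derive(Default, Debug, PartialEq)]
enum Maybe<T> {
    #[default]
    Nothing,
    Something(T),
}

fn main() {
    // Default::default() produces the variant marked #[default].
    assert_eq!(Maybe::<i32>::default(), Maybe::Nothing);
    println!("{:?}", Maybe::<i32>::default());
}
```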
Previously, Mutex, Condvar, and RwLock were backed by the pthreads library on Linux. The pthreads locks support more features than the Rust APIs themselves do, including runtime configuration, and are designed to be used in languages with fewer static guarantees than Rust provides.
The mutex implementation, for example, is 40 bytes and cannot be moved. This forced the standard library to allocate a Box
behind the scenes for each new mutex for platforms that use pthreads.
Rust's standard library now ships with a raw futex-based implementation of these locks on Linux, which is very lightweight and doesn't require extra allocation. In 1.62.0 Mutex
only needs 5 bytes for its internal state on Linux, though this may change in future versions.
This is part of a long effort to improve the efficiency of Rust's lock types, which includes previous improvements on Windows such as unboxing its primitives. You can read more about that effort in the tracking issue.
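You can observe the footprint of these types with mem::size_of. The exact numbers depend on the platform and Rust version, so this sketch only prints them and checks that the type stays small rather than asserting a specific size:

```rust
use std::mem::size_of;
use std::sync::{Condvar, Mutex, RwLock};

fn main() {
    // Sizes are platform- and version-dependent; on Linux with the
    // futex-based implementation each lock needs only a few bytes.
    println!("Mutex<()>:  {} bytes", size_of::<Mutex<()>>());
    println!("Condvar:    {} bytes", size_of::<Condvar>());
    println!("RwLock<()>: {} bytes", size_of::<RwLock<()>>());

    // No Box indirection is needed anymore, so the whole lock fits inline.
    assert!(size_of::<Mutex<()>>() <= 16);
}
```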
x86_64 target

It's now easier to build OS-less binaries for x86_64, for example when writing a kernel. The x86_64-unknown-none target has been promoted to Tier 2 and can be installed with rustup.
rustup target add x86_64-unknown-none
rustc --target x86_64-unknown-none my_no_std_program.rs
You can read more about development using no_std
in the Embedded Rust book.
The following methods and trait implementations are now stabilized:
There are other changes in the Rust 1.62.0 release. Check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.62.0. We couldn't have done it without all of you. Thanks!
Hello Rust community!
We're excited to announce that the Rust project teams will be hosting an unconference the day after RustConf.
The PostConf Unconf will be dedicated to the Rust project and will be a fantastic opportunity for users, contributors, and maintainers to network and discuss the project's development.
There will be no set agenda; instead, attendees will decide what will be discussed together and can move freely between sessions to find ones in which they can contribute most effectively based on their individual interests and needs.
To get the most out of the unconference, jot down your thoughts ahead of time and bring them ready to share. We will also set up a channel in the RustConf Discord for folks to communicate and make preliminary, informal plans.
If you plan to attend, please register as soon as possible to help us plan appropriately. If space is limited, project participants and conference attendees will be given preference. Registration is free and open to everyone, but we require either a RustConf registration ID or a $10 deposit at signup to ensure that registrations are an accurate approximation of participants.
We hope to see you there!
The Cargo nightly sparse-registry feature is ready for testing. The feature causes Cargo to access the crates.io index over HTTP, rather than git. It can provide a significant performance improvement, especially if the local copy of the git index is out-of-date or not yet cloned.
To try it out, add the -Z sparse-registry
flag on a recent nightly build of Cargo. For example, to update dependencies:
rustup update nightly
cargo +nightly -Z sparse-registry update
The feature can also be enabled by setting the environment variable CARGO_UNSTABLE_SPARSE_REGISTRY=true. Setting this variable will have no effect on stable Cargo, making it easy to opt-in for CI jobs.
The minimum Cargo version is cargo 2022-06-17, which is bundled with rustc 2022-06-20.
You can leave feedback on the internals thread.
If you see any issues please report them on the Cargo repo. The output of Cargo with the environment variable CARGO_LOG=cargo::sources::registry::http_remote=trace
set will be helpful in debugging.
Accessing the index over HTTP allows crates.io to continue growing without hampering performance. The current git index continues to grow as new crates are published, and clients must download the entire index. The HTTP index only requires downloading metadata for crates in your dependency tree.
The performance improvement for clients should be especially noticeable in CI environments, particularly if no local cache of the index exists.
On the server side, the HTTP protocol is much simpler to cache on a CDN, which improves scalability and reduces server load. Due to this caching, crate updates may take an extra minute to appear in the index.
The Cargo team plans to eventually make this the default way to access crates.io (though the git index will remain for compatibility with older versions of Cargo and external tools). Cargo.lock
files will continue to reference the existing crates.io index on GitHub to avoid churn.
The -Z sparse-registry
flag also enables alternative registries to be accessed over HTTP. For more details, see the tracking issue.
This project has been in the works for over 2.5 years with collaboration from the crates.io, infra, and Cargo teams.
@kornelski wrote the sparse-index RFC and initial performance proof of concept. @jonhoo created the initial implementation in Cargo and gathered performance data. @arlosi completed the implementation in Cargo and implemented the changes to crates.io to serve the index. @eh2406 provided numerous reviews and feedback to get all the changes landed. Many others from the community helped by providing suggestions, feedback, and testing.
Thank you to everyone involved!
The Rust team is happy to announce a new version of Rust, 1.61.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.61.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.61.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta
) or the nightly channel (rustup default nightly
). Please report any bugs you might come across!
Custom exit codes from main

In the beginning, Rust main functions could only return the unit type () (either implicitly or explicitly), always indicating success in the exit status, and if you wanted otherwise you had to call process::exit(code). Since Rust 1.26, main has been allowed to return a Result, where Ok translated to a C EXIT_SUCCESS and Err to EXIT_FAILURE (also debug-printing the error). Under the hood, these alternate return types were unified by an unstable Termination trait.

In this release, that Termination trait is finally stable, along with a more general ExitCode type that wraps platform-specific return types. ExitCode has SUCCESS and FAILURE constants, and also implements From<u8> for more arbitrary values. The Termination trait can also be implemented for your own types, allowing you to customize any kind of reporting before converting to an ExitCode.
For example, here's a type-safe way to write exit codes for a git bisect run script:
use std::process::{ExitCode, Termination};
#[repr(u8)]
pub enum GitBisectResult {
Good = 0,
Bad = 1,
Skip = 125,
Abort = 255,
}
impl Termination for GitBisectResult {
fn report(self) -> ExitCode {
// Maybe print a message here
ExitCode::from(self as u8)
}
}
fn main() -> GitBisectResult {
std::panic::catch_unwind(|| {
todo!("test the commit")
}).unwrap_or(GitBisectResult::Abort)
}
const fn

Several incremental features have been stabilized in this release to enable more functionality in const functions:

- Basic handling of fn pointers: You can now create, pass, and cast function pointers in a const fn. For example, this could be useful to build compile-time function tables for an interpreter. However, it is still not permitted to call fn pointers.
- Trait bounds: You can now write trait bounds on generic parameters to const fn, such as T: Copy, where previously only Sized was allowed.
- dyn Trait types: Similarly, const fn can now deal with trait objects, dyn Trait.
- impl Trait types: Arguments and return values for const fn can now be opaque impl Trait types.

Note that the trait features do not yet support calling methods from those traits in a const fn.
See the Constant Evaluation section of the reference book to learn more about the current capabilities of const
contexts, and future capabilities can be tracked in rust#57563.
The three standard I/O streams (Stdin, Stdout, and Stderr) each have a lock(&self) method to allow more control over synchronizing reads and writes. However, they returned lock guards with a lifetime borrowed from &self, so they were limited to the scope of the original handle. This was determined to be an unnecessary limitation, since the underlying locks are actually in static storage, so now the guards are returned with a 'static lifetime, disconnected from the handle.
For example, a common error came from trying to get a handle and lock it in one statement:
// error[E0716]: temporary value dropped while borrowed
let out = std::io::stdout().lock();
// ^^^^^^^^^^^^^^^^^ - temporary value is freed at the end of this statement
// |
// creates a temporary which is freed while still in use
Now the lock guard is 'static
, not borrowing from that temporary, so this works!
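A minimal sketch of the pattern that previously failed, wrapped in a helper function; because the guard no longer borrows from the temporary Stdout handle, the single-statement lock compiles:

```rust
use std::io::Write;

// Lock a temporary stdout handle in one statement. The returned guard
// is 'static on Rust 1.61+, so dropping the handle immediately is fine.
fn write_line(msg: &str) -> std::io::Result<()> {
    let mut out = std::io::stdout().lock();
    writeln!(out, "{msg}")
}

fn main() {
    write_line("locked stdout in one statement").unwrap();
}
```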
The following methods and trait implementations are now stabilized:
The following previously stable functions are now const
:
There are other changes in the Rust 1.61.0 release. Check out what changed in Rust, Cargo, and Clippy.
In a future release we're planning to increase the baseline requirements for the Linux kernel to version 3.2, and for glibc to version 2.17. We'd love your feedback in rust#95026.
Many people came together to create Rust 1.61.0. We couldn't have done it without all of you. Thanks!
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG and the crates.io team were notified on 2022-05-02 of the existence of the malicious crate rustdecimal
, which contained malware. The crate name was intentionally similar to the name of the popular rust_decimal crate, hoping that potential victims would misspell its name (an attack called "typosquatting").
To protect the security of the ecosystem, the crates.io team permanently removed the crate from the registry as soon as it was made aware of the malware. An analysis of all the crates on crates.io was also performed, and no other crate with similar code patterns was found.
Keep in mind that the rust_decimal crate was not compromised, and it is still safe to use.
The crate had less than 500 downloads since its first release on 2022-03-25, and no crates on the crates.io registry depended on it.
The crate contained source code and functionality identical to the legitimate rust_decimal crate, except for the Decimal::new function.
When the function was called, it checked whether the GITLAB_CI environment variable was set, and if so it downloaded a binary payload into /tmp/git-updater.bin and executed it. The binary payload supported both Linux and macOS, but not Windows.
An analysis of the binary payload was not possible, as the download URL didn't work anymore when the analysis was performed.
If your project or organization is running GitLab CI, we strongly recommend checking whether your project or one of its dependencies depended on the rustdecimal crate, starting from 2022-03-25. If you notice a dependency on that crate, you should consider your CI environment to be compromised.
In general, we recommend regularly auditing your dependencies, and only depending on crates whose author you trust. If you notice any suspicious behavior in a crate's source code please follow the Rust security policy and report it to the Rust Security Response WG.
We want to thank GitHub user @safinaskar for identifying the malicious crate in this GitHub issue.
The Rust team is happy to announce a new version of Rust, 1.60.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.60.0 with:
rustup update stable
If you don't have it already, you can get rustup
from the appropriate page on our website, and check out the detailed release notes for 1.60.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use
the beta channel (rustup default beta
) or the nightly channel (rustup default nightly
). Please report any bugs you might come across!
Support for LLVM-based coverage instrumentation has been stabilized in rustc. You can try it out by rebuilding your code with -Cinstrument-coverage, for example like this:
RUSTFLAGS="-C instrument-coverage" cargo build
After that, you can run the resulting binary, which will produce a default.profraw file in the current directory. (The path and filename can be overridden by an environment variable; see the documentation for details.)
The llvm-tools-preview component includes llvm-profdata for processing and merging raw profile output (coverage region execution counts), and llvm-cov for report generation. llvm-cov combines the processed output from llvm-profdata with the binary itself, because the binary embeds a mapping from counters to actual source code regions.
rustup component add llvm-tools-preview
$(rustc --print sysroot)/lib/rustlib/x86_64-unknown-linux-gnu/bin/llvm-profdata merge -sparse default.profraw -o default.profdata
$(rustc --print sysroot)/lib/rustlib/x86_64-unknown-linux-gnu/bin/llvm-cov show -Xdemangler=rustfilt target/debug/coverage-testing \
-instr-profile=default.profdata \
-show-line-counts-or-regions \
-show-instantiations
The above commands on a simple helloworld binary produce this annotated report, showing that each line of the input was covered.
1| 1|fn main() {
2| 1| println!("Hello, world!");
3| 1|}
For more details, please read the documentation in the rustc book. The baseline functionality is stable and will exist in some form in all future Rust releases, but the specific output format and LLVM tooling which produces it are subject to change. For this reason, it is important to make sure that you use the same version for both the llvm-tools-preview component and the rustc binary used to compile your code.
cargo --timings
Cargo has stabilized support for collecting information on builds with the --timings flag.
$ cargo build --timings
Compiling hello-world v0.1.0 (hello-world)
Timing report saved to target/cargo-timings/cargo-timing-20220318T174818Z.html
Finished dev [unoptimized + debuginfo] target(s) in 0.98s
The report is also copied to target/cargo-timings/cargo-timing.html
. A report on the release build of Cargo has been put up here. These reports can be useful for improving build performance.
More information about the timing reports may be found in the documentation.
This release introduces two new changes to improve support for Cargo features and how they interact with optional dependencies: Namespaced dependencies and weak dependency features.
Cargo has long supported features along with optional dependencies, as illustrated by the snippet below.
[dependencies]
jpeg-decoder = { version = "0.1.20", default-features = false, optional = true }
[features]
# Enables parallel processing support by enabling the "rayon" feature of jpeg-decoder.
parallel = ["jpeg-decoder/rayon"]
There are two things to note in this example:

- The optional dependency jpeg-decoder implicitly defines a feature of the same name. Enabling the jpeg-decoder feature will enable the jpeg-decoder dependency.
- The "jpeg-decoder/rayon" syntax enables the jpeg-decoder dependency and enables the jpeg-decoder dependency's rayon feature.

Namespaced features tackles the first issue. You can now use the dep: prefix in the [features] table to explicitly refer to an optional dependency without implicitly exposing it as a feature. This gives you more control on how to define the feature corresponding to the optional dependency, including hiding optional dependencies behind more descriptive feature names.
Weak dependency features tackle the second issue, where the "optional-dependency/feature-name" syntax would always enable optional-dependency. However, often you want to enable the feature on the optional dependency only if some other feature has enabled the optional dependency. Starting in 1.60, you can add a ? as in "package-name?/feature-name", which will only enable the given feature if something else has enabled the optional dependency.
For example, let's say we have added some serialization support to our library, and it requires enabling a corresponding feature in some optional dependencies. That can be done like this:
[dependencies]
serde = { version = "1.0.133", optional = true }
rgb = { version = "0.8.25", optional = true }
[features]
serde = ["dep:serde", "rgb?/serde"]
In this example, enabling the serde feature will enable the serde dependency. It will also enable the serde feature for the rgb dependency, but only if something else has enabled the rgb dependency.
Incremental compilation is re-enabled for the 1.60 release. The Rust team continues to work on fixing bugs in incremental, but no problems causing widespread breakage are known at this time, so we have chosen to reenable incremental compilation. Additionally, the compiler team is continuing to work on long-term strategy to avoid future problems of this kind. That process is in relatively early days, so we don't have anything to share yet on that front.
Instant monotonicity guarantees

On all platforms Instant will try to use an OS API that guarantees monotonic behavior if available (which is the case on all tier 1 platforms). In practice such guarantees are, under rare circumstances, broken by hardware, virtualization, or operating system bugs. To work around these bugs and platforms not offering monotonic clocks, Instant::duration_since, Instant::elapsed and Instant::sub now saturate to zero. In older Rust versions this led to a panic instead. Instant::checked_duration_since can be used to detect and handle situations where monotonicity is violated, or Instants are subtracted in the wrong order.
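A small sketch of the saturating behavior and of detecting a wrong-order subtraction explicitly (the sleep just guarantees the two instants differ):

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

fn main() {
    let earlier = Instant::now();
    sleep(Duration::from_millis(10));
    let later = Instant::now();

    // Subtracting in the wrong order saturates to zero instead of panicking.
    assert_eq!(earlier.duration_since(later), Duration::ZERO);

    // checked_duration_since makes the wrong order detectable.
    assert_eq!(earlier.checked_duration_since(later), None);
    assert!(later.checked_duration_since(earlier).is_some());
}
```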
This workaround obscures programming errors where earlier and later instants are accidentally swapped. For this reason future Rust versions may reintroduce panics in at least those cases, if possible and efficient.
Prior to 1.60, the monotonicity guarantees were provided through mutexes or atomics in std, which can introduce large performance overheads to Instant::now(). Additionally, the panicking behavior meant that Rust software could panic in a subset of environments, which was largely undesirable, as the authors of that software may not be able to fix or upgrade the operating system, hardware, or virtualization system they are running on. Further, introducing unexpected panics into these environments made Rust software less reliable and portable, which is of higher concern than exposing typically uninteresting platform bugs in monotonic clock handling to end users.
The following methods and trait implementations are now stabilized:
- Arc::new_cyclic
- Rc::new_cyclic
- slice::EscapeAscii
- <[u8]>::escape_ascii
- u8::escape_ascii
- Vec::spare_capacity_mut
- MaybeUninit::assume_init_drop
- MaybeUninit::assume_init_read
- i8::abs_diff
- i16::abs_diff
- i32::abs_diff
- i64::abs_diff
- i128::abs_diff
- isize::abs_diff
- u8::abs_diff
- u16::abs_diff
- u32::abs_diff
- u64::abs_diff
- u128::abs_diff
- usize::abs_diff
- Display for io::ErrorKind
- From<u8> for ExitCode
- Not for ! (the "never" type)
- Assign<$t> for Wrapping<$t>
- arch::is_aarch64_feature_detected!
There are other changes in the Rust 1.60.0 release. Check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.60.0. We couldn't have done it without all of you. Thanks!
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG was notified that the regex
crate did not
properly limit the complexity of the regular expressions (regex) it parses. An
attacker could use this security issue to perform a denial of service, by
sending a specially crafted regex to a service accepting untrusted regexes. No
known vulnerability is present when parsing untrusted input with trusted
regexes.
This issue has been assigned CVE-2022-24713. The severity of this vulnerability
is "high" when the regex
crate is used to parse untrusted regexes. Other uses
of the regex
crate are not affected by this vulnerability.
The regex
crate features built-in mitigations to prevent denial of service
attacks caused by untrusted regexes, or untrusted input matched by trusted
regexes. Those (tunable) mitigations already provide sane defaults to prevent
attacks. This guarantee is documented and it's considered part of the crate's
API.
Unfortunately a bug was discovered in the mitigations designed to prevent untrusted regexes to take an arbitrary amount of time during parsing, and it's possible to craft regexes that bypass such mitigations. This makes it possible to perform denial of service attacks by sending specially crafted regexes to services accepting user-controlled, untrusted regexes.
All versions of the regex crate up to and including 1.5.4 are affected by this issue. The fix is included starting from regex 1.5.5.
We recommend everyone accepting user-controlled regexes to upgrade immediately
to the latest version of the regex
crate.
Unfortunately there is no fixed set of problematic regexes, as there are practically infinite regexes that could be crafted to exploit this vulnerability. Because of this, we do not recommend denying known problematic regexes.
We want to thank Addison Crump for responsibly disclosing this to us according to the Rust security policy, and for helping review the fix.
We also want to thank Andrew Gallant for developing the fix, and Pietro Albini for coordinating the disclosure and writing this advisory.
The Rust team has published a new version of Rust, 1.59.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.
Today's release falls on the day in which the world's attention is captured by the sudden invasion of Ukraine by Putin's forces. Before going into the details of the new Rust release, we'd like to state that we stand in solidarity with the people of Ukraine and express our support for all people affected by this conflict.
If you have a previous version of Rust installed via rustup, you can get 1.59.0 with:
rustup update stable
If you don't have it already, you can get rustup
from the appropriate page on our website, and check out the detailed release notes for 1.59.0 on GitHub.
The Rust language now supports inline assembly. This enables many applications that need very low-level control over their execution, or access to specialized machine instructions.
When compiling for x86-64 targets, for instance, you can now write:
use std::arch::asm;
// Multiply x by 6 using shifts and adds
let mut x: u64 = 4;
unsafe {
asm!(
"mov {tmp}, {x}",
"shl {tmp}, 1",
"shl {x}, 2",
"add {x}, {tmp}",
x = inout(reg) x,
tmp = out(reg) _,
);
}
assert_eq!(x, 4 * 6);
The format string syntax used to name registers in the asm!
and global_asm!
macros is the same used in Rust format strings, so it should feel quite familiar
to Rust programmers.
The assembly language and instructions available with inline assembly vary according to the target architecture. Today, the stable Rust compiler supports inline assembly on the following architectures:
You can see more examples of inline assembly in Rust By Example, and find more detailed documentation in the reference.
You can now use tuple, slice, and struct patterns as the left-hand side of an assignment.
let (a, b, c, d, e);
(a, b) = (1, 2);
[c, .., d, _] = [1, 2, 3, 4, 5];
Struct { e, .. } = Struct { e: 5, f: 3 };
assert_eq!([1, 2, 1, 4, 5], [a, b, c, d, e]);
This makes assignment more consistent with let
bindings, which have long
supported the same thing. Note that destructuring assignments with operators
such as +=
are not allowed.
Generic types can now specify default values for their const generics. For example, you can now write the following:
struct ArrayStorage<T, const N: usize = 2> {
arr: [T; N],
}
impl<T> ArrayStorage<T> {
fn new(a: T, b: T) -> ArrayStorage<T> {
ArrayStorage {
arr: [a, b],
}
}
}
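Since N defaults to 2 in this definition, the const parameter can be omitted at use sites. A small illustrative check (self-contained, repeating the struct from above):

```rust
struct ArrayStorage<T, const N: usize = 2> {
    arr: [T; N],
}

impl<T> ArrayStorage<T> {
    fn new(a: T, b: T) -> ArrayStorage<T> {
        ArrayStorage { arr: [a, b] }
    }
}

fn main() {
    // ArrayStorage<i32> means ArrayStorage<i32, 2> thanks to the default.
    let storage: ArrayStorage<i32> = ArrayStorage::new(1, 2);
    assert_eq!(storage.arr.len(), 2);
}
```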
Previously, type parameters were required to come before all const parameters. That restriction has been relaxed and you can now interleave them.
fn cartesian_product<
T, const N: usize,
U, const M: usize,
V, F
>(a: [T; N], b: [U; M]) -> [[V; N]; M]
where
F: FnMut(&T, &U) -> V
{
// ...
}
Sometimes bugs in the Rust compiler cause it to accept code that should not have been accepted. An example of this was borrows of packed struct fields being allowed in safe code.
While this happens very rarely, it can be quite disruptive when a crate used by your project has code that will no longer be allowed. In fact, you might not notice until your project inexplicably stops building!
Cargo now shows you warnings when a dependency will be rejected by a future
version of Rust. After running cargo build
or cargo check
, you might see:
warning: the following packages contain code that will be rejected by a future version of Rust: old_dep v0.1.0
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
You can run the cargo report
command mentioned in the warning to see a full
report of the code that will be rejected. This gives you time to upgrade your
dependency before it breaks your build.
It's often useful to strip unnecessary information like debuginfo from binaries you distribute, making them smaller.
While it has always been possible to do this manually after the binary is created, cargo and rustc now support stripping when the binary is linked. To enable this, add the following to your Cargo.toml:
[profile.release]
strip = "debuginfo"
This causes debuginfo to be stripped from release binaries. You can also supply "symbols" or just true to strip all symbol information where supported.
The standard library typically ships with debug symbols and line-level
debuginfo, so Rust binaries built without debug symbols enabled still include
the debug information from the standard library by default. Using the strip
option allows you to remove this extra information, producing smaller Rust
binaries.
See Cargo's documentation for more details.
The 1.59.0 release disables incremental compilation by default (unless explicitly requested via the environment variable RUSTC_FORCE_INCREMENTAL=1). This mitigates
the effects of a known bug, #94124, which can cause deserialization errors (and panics) during compilation
with incremental compilation turned on.
The specific fix for #94124 has landed and is currently in the 1.60 beta, which will ship in six weeks. We are not presently aware of other issues that would encourage a decision to disable incremental in 1.60 stable, and if none arise it is likely that 1.60 stable will re-enable incremental compilation. Incremental compilation remains on by default in the beta and nightly channels.
As always, we encourage users to test on the nightly and beta channels and report issues you find: particularly for incremental bugs, this is the best way to ensure the Rust team can judge whether there is breakage and the number of users it affects.
The following methods and trait implementations are now stabilized:
std::thread::available_parallelism
Result::copied
Result::cloned
arch::asm!
arch::global_asm!
ops::ControlFlow::is_break
ops::ControlFlow::is_continue
TryFrom<char> for u8
char::TryFromCharError (implementing Clone, Debug, Display, PartialEq, Copy, Eq, and Error)
iter::zip
NonZeroU8::is_power_of_two
NonZeroU16::is_power_of_two
NonZeroU32::is_power_of_two
NonZeroU64::is_power_of_two
NonZeroU128::is_power_of_two
DoubleEndedIterator for ToLowercase
DoubleEndedIterator for ToUppercase
TryFrom<&mut [T]> for [T; N]
UnwindSafe for Once
RefUnwindSafe for Once
The following previously stable functions are now const:
mem::MaybeUninit::as_ptr
mem::MaybeUninit::assume_init
mem::MaybeUninit::assume_init_ref
ffi::CStr::from_bytes_with_nul_unchecked
There are other changes in the Rust 1.59.0 release. Check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.59.0. We couldn't have done it without all of you. Thanks!
We have an exciting announcement to make! The rust-analyzer project, a new implementation of the Language Server Protocol (LSP) for Rust, is now officially a part of the wider Rust organization! 🎉
We want to start by thanking everyone who has gotten us this far, from contributors, to sponsors, to all the users of rust-analyzer in the Rust community. We could not have done this without you.
The immediate impact of this organizational change is limited -- nothing changes for rust-analyzer users or contributors. However, this change unblocks technical work to make rust-analyzer the officially recommended language server for Rust in the near future.
If you were hesitant to try rust-analyzer before, today is a good opportunity to do so. Not only is it a very capable language server for Rust, but according to VS Code statistics, it is one of the best rated LSP implementations across programming languages. We highly recommend giving rust-analyzer a spin today, even if it will take some more time for us to complete the due process and switch from the existing officially recommended LSP implementation (RLS) properly.
rust-analyzer enjoys excellent support in many editors:
For other editors, check the manual.
Finally, if you are using IntelliJ-Platform based IDEs like CLion, IDEA or PyCharm, you don't need rust-analyzer. You should use the awesome IntelliJ Rust plugin by JetBrains.
The rust-analyzer project was started at the very end of 2017 (first commit). At that time, the existing LSP implementation, RLS, had been providing IDE support for Rust for several years. While it handled well the most important features, it was clearly far from the experience offered by state-of-the-art IDEs for some other languages.
Originally, the plan was to just experiment with error-resilient parsing for Rust; but when you have a good parser, it is so tempting to add a simple LSP server on top of it. Long story short, it took surprisingly little effort to get to a prototype which was already useful as an IDE, which happened in Autumn 2018. At that critical point, the company Ferrous Systems (which was newborn itself) stepped in to fund further development of the prototype.
During 2019, the then nascent rust-analyzer community worked hard to build out the foundation of an IDE. By 2020, we realized that what we had built was no longer a prototype, but an already tremendously useful tool for day-to-day Rust programming. This culminated in RFC2912: "Transition to rust-analyzer as our official LSP (Language Server Protocol) implementation". The RFC was accepted with overwhelming support from the community: it is still the most upvoted Rust RFC ever. However, there was a wrinkle in the plan -- rust-analyzer was not an official Rust project! That's what we are fixing today!
Next, we will proceed with the plan outlined in the RFC: advertising rust-analyzer as the very likely future of Rust IDE support, gathering feedback, and, conditioned on the positive outcome of that, sunsetting RLS, the currently recommended language server. So, once again -- do try out rust-analyzer and leave feedback on the tracking issues.
After the transition, we will double down on the technical side of things.
As exciting as rust-analyzer is today, it only scratches the surface of what's possible when you bring the compiler's intricate understanding of the code right into the text editor. The end-game we are aiming for is creating an API to analyze and transform Rust code with full access to semantics.
One of the hardest nuts to crack for the present transition was the question of funding. Today, Rust is organized as a set of somewhat independent projects (rustc, cargo, rustup, rustfmt), and there's deliberately no way to fund a specific project directly. The new Rust Foundation is the official place to sponsor Rust in general, with the Foundation Board overseeing funds allocation. Yet, it has always been encouraged for individuals to seek individual funding. While the Rust project may advertise funding opportunities for individual contributors, it does not officially endorse these efforts nor does it facilitate the funding of entire teams.
rust-analyzer has received a significant share of funds from its OpenCollective and later GitHub Sponsors, managed by Ferrous Systems. This OpenCollective funded efforts by both individual contributors and Ferrous Systems employees. Details of this can be found in their transparency reports.
Luckily, the OpenCollective has always been managed in a way that would make it possible to transfer it to a different account holder. With this transition, the OpenCollective will be renamed from "rust-analyzer OpenCollective" to "Ferrous Systems OpenCollective (rust-analyzer)". This allows current sponsors to continue to sponsor and also make it clear that their chosen project will continue to be funded.
In a sense, the OpenCollective is handed to Ferrous Systems. All Sponsor credits will move to https://ferrous-systems.com/open-source/#sponsors.
We would like to thank Ferrous Systems for their openness and flexibility in the process, for their thoughtfulness in making sure the funding situation around rust-analyzer was clear, and for taking on the effort of fundraising.
Eventually the rust-analyzer GitHub Sponsors will also move away from the rust-analyzer GitHub organisation.
And of course, another great way for companies to support rust-analyzer development is to hire the people working on rust-analyzer to continue to do so.
We'd like to once again thank everyone who helped get rust-analyzer to this point. From experiment to being well on its way to becoming the officially recommended LSP implementation for Rust, we couldn't have done it without the help of our contributors, sponsors, and users.
So that's where we are at right now! Thanks to the awesome contributors to rustc, clippy, cargo, LSP, IntelliJ Rust, RLS and rust-analyzer, Rust today already enjoys great IDE support, even if it still has a bit of experimental flair to it.
Greetings Rustaceans!
Another year has passed, and with it comes another annual Rust survey analysis! The survey was conducted in December 2021.
We’d like to thank everyone who participated in this year’s survey, with a special shout-out to those who helped translate the survey from English into other languages.
Without further ado, let’s dive into the analysis!
The Rust community continues to grow, with this survey having the largest number of complete survey responses (9354 respondents), exceeding last year's total by roughly 1500 responses.
90% of respondents said that they use Rust for any purpose, while 5% stated they had used Rust at some point in the past but no longer do, and 4% stated they have yet to use Rust at all.
The survey was offered in 10 languages with 78% filling out the survey in English followed by Simplified Chinese (6%), German (4%), and French (3%). Despite English being the language most respondents completed the survey in, respondents hailed from all around the world. The United States was the country with the largest representation at 24% followed by Germany (12%), China (7%), and the U.K. (6%). In total 113 different countries and territories were represented through this survey!
English, however, is not the language of choice for all Rustaceans with nearly 7% preferring not to use English as a language for technical communication. An additional 23% of respondents prefer another language in addition to English. The most commonly preferred languages (besides English) roughly follow where Rustaceans live with Simplified Chinese, German, and French being the top 3. However, Japanese, Simplified Chinese, and Russian speakers were the most likely to prefer not to use English at all for technical conversation.
The percentage of people using Rust continues to rise. Of those using Rust, 81% are currently using it on at least a weekly basis compared to 72% from last year's survey.
75% of all Rust users say they can write production ready code, though 27% say that it is at times a struggle.
Overall, Rustaceans seem to be having a great time writing Rust with only 1% saying it isn't fun to use. Only a quarter of a percent find Rust doesn't have any real benefit over other programming languages.
Rust can now safely be classified as a language used by people in professional settings. Of those respondents using Rust, 59% use it at least occasionally at work with 23% using Rust for the majority of their coding. This is a large increase over last year where only 42% of respondents used Rust at work.
Adopting Rust at work seems to follow a long but ultimately worthwhile path for a lot of Rustaceans. First, 83% of those who have adopted Rust at work found it to be "challenging". How much this is related to Rust itself versus general challenges with adopting a new programming language, however, is unclear. During adoption only 13% of respondents found the language was slowing their team down and 82% found that Rust helped their teams achieve their goals.
After adoption, the costs seem to be justified: only 1% of respondents did not find the challenge worth it while 79% said it definitely was. When asked if their teams were likely to use Rust again in the future, 90% agreed. Finally, of respondents using Rust at work, 89% of respondents said their teams found it fun and enjoyable to program.
As for why respondents are using Rust at work, the top answer was that it allowed users "to build relatively correct and bug free software" with 96% of respondents agreeing with that statement. After correctness, performance (92%) was the next most popular choice. 89% of respondents agreed that they picked Rust at work because of Rust's much-discussed security properties.
Overall, Rust seems to be a language ready for the challenges of production, with only 3% of respondents saying that Rust was a "risky" choice for production use.
Overall, the annual survey points towards a growing, healthy community of Rustaceans, but this is not to say we don't have work ahead of us. Compile times, a historical focus of improvement for the Rust project, continue to not be where they need to be, with 61% of respondents saying work still needs to be done to improve them. That said, to the compiler team's credit, 61% found that compile times improved over the past year. Other areas indicated as in need of more improvement were disk space (45%), debugging (40%), and GUI development (56%).
The IDE experience (led through continued adoption and improvement of various tools like rust-analyzer, IntelliJ Rust, etc.) gets the prize for showing the most improvement: 56% found it has improved over the last year.
However, compiler error messages received the most praise, with 90% approval of their current state. 🎉
When asked what their biggest worries for the future of Rust are, the top answer was a fear that there will not be enough usage in industry (38%). Given that Rust continues to show strides in adoption at places of work, the community seems to be on a good path to overcoming this concern.
The next largest concern was that the language would become too complex (33%). This was combined with a relatively small number of folks calling for additional features (especially for ones not already in the pipeline).
Finally, the third largest concern was that those working on Rust would not find the proper support they need to continue to develop the language and community in a healthy way (30%). With the establishment of the Rust Foundation, support structures are coming into place that hopefully will address this point, but no doubt plenty of work is still ahead of us.
2021 was arguably one of the most significant years in Rust's history - with the establishment of the Rust foundation, the 2021 edition, and a larger community than ever, Rust seems to be on a solid path as we head into the future.
Plenty of work remains, but here's hoping for a great 2022!
Every so often, the crates.io index's Git history is squashed into one commit to minimize the history Cargo needs to download. When the index is squashed, we save snapshots to preserve the history of crate publishes.
Currently, those snapshots are stored as branches in the main index Git repository. Those branches are using server resources though, as the server still has to consider their contents whenever Cargo asks for the master branch. We will be deleting the snapshot branches from this repository to ensure that all objects referenced in the master branch will only be compressed against other objects in the master branch, ensuring that the current clone behavior will be much more efficient on the server side.
Here's how this might affect you:
You should not see any effects from this change. Cargo does not use the snapshot branches, and Cargo regularly handles index squashes. If you do see any issues, they are a bug; please report them on the Cargo repo.
In one week, on 2022-02-21, we will be removing all snapshot branches from the crates.io-index repo. All snapshot branches, both historical and in the future, are and will be in the rust-lang/crates.io-index-archive repo instead. Please update any scripts or tools referencing the snapshot branches by that time.
In the medium term, we're working to prioritize the completion of in-progress work to add a way to serve the index as static files over HTTP, which will further ease the server load. The index repository will not be going away, so older versions of Cargo will continue to work. See RFC 2789 for more details.
We want to say thanks to three people who recently have decided to step back from the Core Team:
We're thankful for Steve's, Florian's and Pietro's contributions to the Core Team & the Rust project in the past and we’re looking forward to any contributions they will still make in the future.
The Rust team has published a new point release of Rust, 1.58.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.58.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
Rust 1.58.1 fixes a race condition in the std::fs::remove_dir_all standard library function. This security vulnerability is tracked as CVE-2022-21658, and you can read more about it in the advisory we published earlier today. We recommend all users update their toolchain immediately and rebuild their programs with the updated compiler.
Rust 1.58.1 also addresses several regressions in diagnostics and tooling introduced in Rust 1.58.0:
The non_send_fields_in_send_ty Clippy lint was discovered to have too many false positives and has been moved to Clippy's experimental lint group.
The useless_format Clippy lint has been updated to handle captured identifiers in format strings.
A rustc regression affecting some cases has been fixed.
You can find more detailed information on the specific regressions in the release notes.
Many people came together to create Rust 1.58.1. We couldn't have done it without all of you. Thanks!
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG was notified that the std::fs::remove_dir_all standard library function is vulnerable to a race condition enabling symlink following (CWE-363). An attacker could use this security issue to trick a privileged program into deleting files and directories the attacker couldn't otherwise access or delete.
This issue has been assigned CVE-2022-21658.
Let's suppose an attacker obtained unprivileged access to a system and needed to delete a system directory called sensitive/, but they didn't have the permissions to do so. If std::fs::remove_dir_all followed symbolic links, they could find a privileged program that removes a directory they have access to (called temp/), create a symlink from temp/foo to sensitive/, and wait for the privileged program to delete temp/. The privileged program would follow the symlink from temp/foo to sensitive/ while recursively deleting, resulting in sensitive/ being deleted.
To prevent such attacks, std::fs::remove_dir_all
already includes protection
to avoid recursively deleting symlinks, as described in its documentation:
This function does not follow symbolic links and it will simply remove the symbolic link itself.
Unfortunately that check was implemented incorrectly in the standard library, resulting in a TOCTOU (Time-of-check Time-of-use) race condition. Instead of telling the system not to follow symlinks, the standard library first checked whether the thing it was about to delete was a symlink, and otherwise it would proceed to recursively delete the directory.
This exposed a race condition: an attacker could create a directory and replace it with a symlink between the check and the actual deletion. While this attack likely won't work the first time it's attempted, in our experimentation we were able to reliably perform it within a couple of seconds.
Rust 1.0.0 through Rust 1.58.0 are affected by this vulnerability. We're going to release Rust 1.58.1 later today, which will include mitigations for this vulnerability. Patches to the Rust standard library are also available for custom-built Rust toolchains here.
Note that the following targets don't have usable APIs to properly mitigate the attack, and are thus still vulnerable even with a patched toolchain:
We recommend that everyone update to Rust 1.58.1 as soon as possible, especially people developing programs expected to run in privileged contexts (including system daemons and setuid binaries), as those have the highest risk of being affected by this.
Note that adding checks in your codebase before calling remove_dir_all will not mitigate the vulnerability, as they would also be vulnerable to race conditions, just like remove_dir_all itself. The existing mitigation is working as intended outside of race conditions.
We want to thank Hans Kratz for independently discovering and disclosing this issue to us according to the Rust security policy, for developing the fix for UNIX-like targets and for reviewing fixes for other platforms.
We also want to thank Florian Weimer for reviewing the UNIX-like fix and for reporting the same issue back in 2018, even though the Security Response WG didn't realize the severity of the issue at the time.
Finally we want to thank Pietro Albini for coordinating the security response and writing this advisory, Chris Denton for writing the Windows fix, Alex Crichton for writing the WASI fix, and Mara Bos for reviewing the patches.
The Rust team is happy to announce a new version of Rust, 1.58.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.58.0 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.58.0 on GitHub.
Rust 1.58 brings captured identifiers in format strings, a change to the Command search path on Windows, more #[must_use] annotations in the standard library, and some new library stabilizations.
Format strings can now capture arguments simply by writing {ident}
in the
string. Formats have long accepted positional arguments (optionally by index)
and named arguments, for example:
println!("Hello, {}!", get_person()); // implicit position
println!("Hello, {0}!", get_person()); // explicit index
println!("Hello, {person}!", person = get_person()); // named
Now named arguments can also be captured from the surrounding scope, like:
let person = get_person();
// ...
println!("Hello, {person}!"); // captures the local `person`
This may also be used in formatting parameters:
let (width, precision) = get_format();
for (name, score) in get_scores() {
println!("{name}: {score:width$.precision$}");
}
Format strings can only capture plain identifiers, not arbitrary paths or
expressions. For more complicated arguments, either assign them to a local name
first, or use the older name = expression
style of formatting arguments.
This feature works in all macros accepting format strings. However, one corner case is the panic! macro in the 2015 and 2018 editions, where panic!("{ident}") is still treated as an unformatted string -- the compiler will warn that this does not have the intended effect. Due to the 2021 edition's update of the panic macros for improved consistency, this works as expected in 2021 edition panic!.
Command search path

On Windows targets, std::process::Command will no longer search the current directory for executables. That effect was owed to historical behavior of the win32 CreateProcess API, so Rust was effectively searching in this order:

1. (Rust-specific) The directories listed in the child's PATH environment variable, if it was explicitly changed from the parent.
2. The directory from which the application loaded.
3. The current directory of the parent process.
4. The 32-bit Windows system directory.
5. The 16-bit Windows system directory.
6. The Windows directory.
7. The directories listed in the PATH environment variable.

However, using the current directory can lead to surprising results, or even malicious behavior when dealing with untrusted directories. For example, ripgrep published CVE-2021-3013 when they learned that their child processes could be intercepted in this way. Even Microsoft's own PowerShell documents that it does not use the current directory, for security reasons.

Rust now performs its own search without the current directory, and the legacy 16-bit system directory is also not included, as there is no API to discover its location. So the new Command search order for Rust on Windows is:

1. The directories listed in the child's PATH environment variable.
2. The directory from which the application loaded.
3. The 32-bit Windows system directory.
4. The Windows directory.
5. The directories listed in the PATH environment variable.

Non-Windows targets continue to use their platform-specific behavior, most often only considering the child or parent PATH environment variable.
#[must_use] in the standard library

The #[must_use] attribute can be applied to types or functions when failing to explicitly consider them or their output is almost certainly a bug. This has long been used in the standard library for types like Result, which should be checked for error conditions. This also helps catch mistakes such as expecting a function to mutate a value in-place, when it actually returns a new value.
Library proposal 35 was approved in October 2021 to audit and expand the
application of #[must_use]
throughout the standard library, covering many
more functions where the primary effect is the return value. This is similar
to the idea of function purity, but looser than a true language feature. Some
of these additions were present in release 1.57.0, and now in 1.58.0 the effort
has completed.
The following methods and trait implementations were stabilized.
Metadata::is_symlink
Path::is_symlink
{integer}::saturating_div
Option::unwrap_unchecked
Result::unwrap_unchecked
Result::unwrap_err_unchecked
The following previously stable functions are now const.
Duration::new
Duration::checked_add
Duration::saturating_add
Duration::checked_sub
Duration::saturating_sub
Duration::checked_mul
Duration::saturating_mul
Duration::checked_div
There are other changes in the Rust 1.58.0 release: check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.58.0. We couldn't have done it without all of you. Thanks!
It's that time again! Time for us to take a look at who the Rust community is composed of, how the Rust project is doing, and how we can improve the Rust programming experience. The Rust Community Team is pleased to announce our 2021 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses, and establish development priorities for the future.
Completing this survey should take about 10–30 minutes and is anonymous. We will be accepting submissions for the next two weeks (until the 22nd of December), and we will write up our findings afterwards on blog.rust-lang.org. You can also check out last year's results.
(If you speak multiple languages, please pick one)
Please help us spread the word by sharing the survey link on your social network feeds, at meetups, around your office, and in other communities.
If you have any questions, please see our frequently asked questions or email the Rust Community team at community-team@rust-lang.org.
Finally, we wanted to thank everyone who helped develop, polish, and test the survey. In particular, we'd like to thank all of the volunteers who worked to provide all of the translations available this year and who will help to translate the results.
The Rust team is happy to announce a new version of Rust, 1.57.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.57.0 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.57.0 on GitHub.
Rust 1.57 brings panic!
to const contexts, adds support for custom profiles to Cargo, and stabilizes fallible reservation APIs.
panic! in const contexts

With previous versions of Rust, the panic! macro was not usable in const fn and other compile-time contexts. Now, this has been stabilized. Together with the stabilization of panic!, several other standard library APIs are now usable in const contexts, such as assert!.
This stabilization does not yet include the full formatting infrastructure, so the panic!
macro must be called with either a static string (panic!("...")
), or with a single &str
interpolated value (panic!("{}", a)
) which must be used with {}
(no format specifiers or other traits).
It is expected that in the future this support will expand, but this minimal stabilization already enables straightforward compile-time assertions, for example to verify the size of a type:
const _: () = assert!(std::mem::size_of::<u64>() == 8);
const _: () = assert!(std::mem::size_of::<u8>() == 1);
Cargo has long supported four profiles: dev
, release
, test
, and bench
. With Rust 1.57, support has been added for arbitrarily named profiles.
For example, if you want to enable link time optimizations (LTO) only when making the final production build, adding the following snippet to Cargo.toml enables the lto
flag when this profile is selected, but avoids enabling it for regular release builds.
[profile.production]
inherits = "release"
lto = true
Note that custom profiles must specify a profile from which they inherit default settings. Once the profile has been defined, Cargo commands which build code can be asked to use it with --profile production
. Currently, this will build artifacts in a separate directory (target/production
in this case), which means that artifacts are not shared between directories.
Rust 1.57 stabilizes try_reserve
for Vec
, String
, HashMap
, HashSet
, and VecDeque
. This API enables callers to fallibly allocate the backing storage for these types.
Rust will usually abort the process if the global allocator fails, which is not always desirable. This API provides a method for avoiding that abort when working with the standard library collections. However, Rust does not guarantee that the returned memory is actually allocated by the kernel: for example, if overcommit is enabled on Linux, the memory may not be available when its use is attempted.
The following methods and trait implementations were stabilized.
[T; N]::as_mut_slice
[T; N]::as_slice
collections::TryReserveError
HashMap::try_reserve
HashSet::try_reserve
String::try_reserve
String::try_reserve_exact
Vec::try_reserve
Vec::try_reserve_exact
VecDeque::try_reserve
VecDeque::try_reserve_exact
Iterator::map_while
iter::MapWhile
proc_macro::is_available
Command::get_program
Command::get_args
Command::get_envs
Command::get_current_dir
CommandArgs
CommandEnvs
The following previously stable functions are now const.
There are other changes in the Rust 1.57.0 release: check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.57.0. We couldn't have done it without all of you. Thanks!
The Rust team has published a new point release of Rust, 1.56.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.56.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
Rust 1.56.1 introduces two new lints to mitigate the impact of a security concern recently disclosed, CVE-2021-42574. We recommend all users upgrade immediately to ensure their codebase is not affected by the security issue.
You can learn more about the security issue in the advisory.
This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.
The Rust Security Response WG was notified of a security concern affecting source code containing "bidirectional override" Unicode codepoints: in some cases the use of those codepoints could lead to the reviewed code being different than the compiled code.
This is a vulnerability in the Unicode specification, and its assigned identifier is CVE-2021-42574. While the vulnerability itself is not a rustc flaw, we're taking proactive measures to mitigate its impact on Rust developers.
Unicode has support for both left-to-right and right-to-left languages, and to aid writing left-to-right words inside a right-to-left sentence (or vice versa) it also features invisible codepoints called "bidirectional override".
These codepoints are normally used across the Internet to embed a word inside a sentence of another language (with a different text direction), but it was reported to us that they could be used to manipulate how source code is displayed in some editors and code review tools, leading to the reviewed code being different than the compiled code. This is especially bad if the whole team relies on bidirectional-aware tooling.
As an example, the following snippet (with {U+NNNN} replaced with the Unicode codepoint NNNN):
if access_level != "user{U+202E} {U+2066}// Check if admin{U+2069} {U+2066}" {
...would be rendered by bidirectional-aware tools as:
if access_level != "user" { // Check if admin
Rust 1.56.1 introduces two new lints to detect and reject code containing the affected codepoints. Rust 1.0.0 through Rust 1.56.0 do not include such lints, leaving your source code vulnerable to this attack if you do not perform out-of-band checks for the presence of those codepoints.
To assess the security of the ecosystem we analyzed all crate versions ever published on crates.io (as of 2021-10-17), and only 5 crates have the affected codepoints in their source code, with none of the occurrences being malicious.
We will be releasing Rust 1.56.1 today, 2021-11-01, with two new deny-by-default lints detecting the affected codepoints, respectively in string literals and in comments. The lints will prevent source code files containing those codepoints from being compiled, protecting you from the attack.
If your code has legitimate uses for the codepoints we recommend replacing them with the related escape sequence. The error messages will suggest the right escapes to use.
If you can't upgrade your compiler version, or your codebase also includes non-Rust source code files, we recommend periodically checking that the following codepoints are not present in your repository and your dependencies: U+202A, U+202B, U+202C, U+202D, U+202E, U+2066, U+2067, U+2068, U+2069.
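As a sketch of such an out-of-band check, the listed codepoints can be scanned for directly (the function name and structure here are illustrative, not part of any official tooling):

```rust
/// The bidirectional-override codepoints listed above.
const BIDI_OVERRIDES: [char; 9] = [
    '\u{202A}', '\u{202B}', '\u{202C}', '\u{202D}', '\u{202E}',
    '\u{2066}', '\u{2067}', '\u{2068}', '\u{2069}',
];

/// Returns true if `text` contains any bidirectional-override codepoint.
fn contains_bidi_override(text: &str) -> bool {
    text.chars().any(|c| BIDI_OVERRIDES.contains(&c))
}

fn main() {
    assert!(!contains_bidi_override("fn main() {}"));
    // The attack embeds an invisible override inside a string literal:
    assert!(contains_bidi_override("user\u{202E}"));
}
```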
Thanks to Nicholas Boucher and Ross Anderson from the University of Cambridge for disclosing this to us according to our security policy!
We also want to thank the members of the Rust project who contributed to the mitigations for this issue. Thanks to Esteban Küber for developing the lints, Pietro Albini for leading the security response, and many others for their involvement, insights and feedback: Josh Stone, Josh Triplett, Manish Goregaokar, Mara Bos, Mark Rousskov, Niko Matsakis, and Steve Klabnik.
As part of their research, Nicholas Boucher and Ross Anderson also uncovered a similar security issue identified as CVE-2021-42694 involving homoglyphs inside identifiers. Rust already includes mitigations for that attack since Rust 1.53.0. Rust 1.0.0 through Rust 1.52.1 are not affected due to the lack of support for non-ASCII identifiers in those releases.
The Rust team is happy to announce a new version of Rust, 1.56.0. This stabilizes the 2021 edition as well. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.56.0 is as easy as:
rustup update stable
If you don't have it already, you can get rustup
from the appropriate page on our website, and check out the detailed release notes for 1.56.0 on GitHub.
We wrote about plans for the Rust 2021 Edition in May. Editions are a mechanism for opt-in changes that may otherwise pose backwards compatibility risk. See the edition guide for details on how this is achieved. This is a smaller edition, especially compared to 2018, but there are still some nice quality-of-life changes that require an edition opt-in to avoid breaking some corner cases in existing code. See the new chapters of the edition guide below for more details on each new feature and guidance for migration.
IntoIterator for arrays: array.into_iter() now iterates over items by value instead of by reference.
Or patterns in macro_rules: A|B is now allowed in :pat.
TryInto, TryFrom, and FromIterator are now in scope by default.
Panic macro consistency: panic!(..) now always uses format_args!(..), just like println!().
Reserved syntax: ident#, ident"...", and ident'...'.
Warnings promoted to errors: bare_trait_objects and ellipsis_inclusive_range_patterns.
Closures automatically capture values or references to identifiers that are used in the body, but before 2021, they were always captured as a whole. The new disjoint-capture feature will likely simplify the way you write closures, so let's look at a quick example:
// 2015 or 2018 edition code
let a = SomeStruct::new();
// Move out of one field of the struct
drop(a.x);
// Ok: Still use another field of the struct
println!("{}", a.y);
// Error: Before 2021 edition, tries to capture all of `a`
let c = || println!("{}", a.y);
c();
To fix this, you would have had to extract something like let y = &a.y;
manually before the closure to limit its capture. Starting in Rust 2021,
closures will automatically capture only the fields that they use, so the
above example will compile fine!
This new behavior is only activated in the new edition, since it can change
the order in which fields are dropped. As for all edition changes, an
automatic migration is available, which will update your closures for which
this matters by inserting let _ = &a;
inside the closure to force the
entire struct to be captured as before.
The guide includes migration instructions for all new features, and in general, transitioning an existing project to a new edition.
In many cases cargo fix
can automate the necessary changes. You may even
find that no changes in your code are needed at all for 2021!
However small this edition appears on the surface, it's still the product of a lot of hard work from many contributors: see our dedicated celebration and thanks tracker!
Cargo rust-version
Cargo.toml now supports a [package] rust-version field to specify the minimum supported Rust version for a crate, and Cargo will exit with an early error if that is not satisfied. This doesn't currently influence the dependency resolver, but the idea is to catch compatibility problems before they turn into cryptic compiler errors.
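For illustration, a minimal manifest using the new field might look like this (crate name and versions are hypothetical):

```toml
[package]
name = "my-crate"       # hypothetical crate name
version = "0.1.0"
edition = "2018"
rust-version = "1.56"   # minimum supported Rust version
```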
binding @ pattern
Rust pattern matching can be written with a single identifier that binds
the entire value, followed by @
and a more refined structural pattern,
but this has not allowed additional bindings in that pattern -- until now!
struct Matrix {
data: Vec<f64>,
row_len: usize,
}
// Before, we need separate statements to bind
// the whole struct and also read its parts.
let matrix = get_matrix();
let row_len = matrix.row_len;
// or with a destructuring pattern:
let Matrix { row_len, .. } = matrix;
// Rust 1.56 now lets you bind both at once!
let matrix @ Matrix { row_len, .. } = get_matrix();
This actually was allowed in the days before Rust 1.0, but that was removed due to known unsoundness at the time. With the evolution of the borrow checker since that time, and with heavy testing, the compiler team determined that this was safe to finally allow in stable Rust!
The following methods and trait implementations were stabilized.
std::os::unix::fs::chroot
UnsafeCell::raw_get
BufWriter::into_parts
core::panic::{UnwindSafe, RefUnwindSafe, AssertUnwindSafe} (previously only in std)
Vec::shrink_to
String::shrink_to
OsString::shrink_to
PathBuf::shrink_to
BinaryHeap::shrink_to
VecDeque::shrink_to
HashMap::shrink_to
HashSet::shrink_to
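As a small illustration of the shrink_to family (the values here are only an example), the method shrinks a collection's capacity down toward a caller-supplied lower bound:

```rust
fn main() {
    let mut v: Vec<i32> = Vec::with_capacity(100);
    v.extend([1, 2, 3].iter().copied());
    // Shrink the capacity, but keep room for at least 10 elements.
    // The capacity stays at least as large as both the length and
    // the supplied minimum.
    v.shrink_to(10);
    assert!(v.capacity() >= 10);
    assert_eq!(v, vec![1, 2, 3]);
}
```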
The following previously stable functions are now const.
There are other changes in the Rust 1.56.0 release: check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.56.0 and the 2021 edition. We couldn't have done it without all of you. Thanks!
The Rust Core team is excited to announce the first of a series of changes to its structure we’ve been planning for 2021, starting today by adding several new members.
Originally, the Core team was composed of the leads from each Rust team. However, as Rust has grown, this has long stopped being true; most members of the Core team are not team leads in the project. In part, this is because Core’s duties have evolved significantly away from the original technical focus. Today, we see the Core team’s purpose as enabling, amplifying, and supporting the excellent work of every Rust team. Notably, this included setting up and launching the Rust Foundation.
We know that our maintainers, and especially team leads, dedicate an enormous amount of time to their work on Rust. We care deeply that it’s possible for not just people working full time on Rust to be leaders, but that part time volunteers can as well. To enable this, we wish to avoid coupling leading a team with a commitment to stewarding the project as a whole as part of the Core team. Likewise, it is important that members of the Core team have the option to dedicate their time to just the Core team’s activities and serve the project in that capacity only.
Early in the Rust project, the Core team was composed almost entirely of Mozilla employees working full time on Rust. Because the team was made up of team leads, it follows that team leads were also overwhelmingly Mozilla employees. As Rust has grown, folks previously employed at Mozilla left for new jobs and new folks appeared. Many of the new folks were not employed to work on Rust full time, so the collective time investment decreased and the Core team’s work schedule shifted from 9-to-5 to a more volunteer cadence. Currently, the Core team is composed largely of volunteers, and no member of the Core team is employed full time to work on their Core team duties.
We know that it’s critical to driving this work successfully to have stakeholders on the team who are actively working in all areas of the project to help prioritize the Core team’s initiatives. To serve this goal, we are announcing some changes to the Core team’s membership today: Ryan Levick, Jan-Erik Rediger, and JT are joining the Core team. To give some context on their backgrounds and experiences, each new member has written up a brief introduction.
These new additions will add fresh perspectives along several axes, including geographic and employment diversity. However, we recognize there are aspects of diversity we can continue to improve. We see this work as critical to the ongoing health of the Rust project, and it is part of the work that will be coordinated between the Rust Core team and the Rust Foundation.
Manish Goregaokar is also leaving the team to be able to focus better on the dev-tools team. Combining team leadership with Core team duties is a heavy burden. While Manish has enjoyed his time working on project-wide initiatives, this coupling isn’t quite fair to the needs of the devtools team, and he’s glad to be able to spend more time on the devtools team moving forward.
The Core team has been doing a lot of work in figuring out how to improve how we work and how we interface with the rest of the project. We’re excited to be able to share more on this in future updates.
We're super excited for Manish’s renewed efforts on the dev tools team and for JT, Ryan, and Jan-Erik to get started on core team work! Congrats and good luck!
This post is part 1 of a multi-part series on updates to the Rust core team.
The Rust team is happy to announce a new version of Rust, 1.55.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.55.0 is as easy as:
rustup update stable
If you don't have it already, you can get rustup
from the appropriate page on our website, and check out the detailed release notes for 1.55.0 on GitHub.
In past releases, when running cargo test
, cargo check --all-targets
, or similar commands which built the same Rust crate in multiple configurations, errors and warnings could show up duplicated, because the parallel rustc invocations each emitted the same warning.
For example, in 1.54.0, output like this was common:
$ cargo +1.54.0 check --all-targets
Checking foo v0.1.0
warning: function is never used: `foo`
--> src/lib.rs:9:4
|
9 | fn foo() {}
| ^^^
|
= note: `#[warn(dead_code)]` on by default
warning: 1 warning emitted
warning: function is never used: `foo`
--> src/lib.rs:9:4
|
9 | fn foo() {}
| ^^^
|
= note: `#[warn(dead_code)]` on by default
warning: 1 warning emitted
Finished dev [unoptimized + debuginfo] target(s) in 0.10s
In 1.55, this behavior has been adjusted to deduplicate and print a report at the end of compilation:
$ cargo +1.55.0 check --all-targets
Checking foo v0.1.0
warning: function is never used: `foo`
--> src/lib.rs:9:4
|
9 | fn foo() {}
| ^^^
|
= note: `#[warn(dead_code)]` on by default
warning: `foo` (lib) generated 1 warning
warning: `foo` (lib test) generated 1 warning (1 duplicate)
Finished dev [unoptimized + debuginfo] target(s) in 0.84s
The standard library's implementation of float parsing has been updated to use the Eisel-Lemire algorithm, which brings both speed improvements and improved correctness. In the past, certain edge cases failed to parse, and this has now been fixed.
You can read more details on the new implementation in the pull request description.
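As a quick illustration (these particular inputs are just examples), parsing is exercised through the ordinary FromStr path:

```rust
fn main() {
    // The smallest positive normal f64, written out in full, parses
    // to exactly f64::MIN_POSITIVE.
    let x: f64 = "2.2250738585072014e-308".parse().unwrap();
    assert_eq!(x, f64::MIN_POSITIVE);

    // Ordinary decimals are correctly rounded to the nearest f64.
    let y: f64 = "0.1".parse().unwrap();
    assert_eq!(y, 0.1);
}
```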
std::io::ErrorKind variants updated
std::io::ErrorKind
is a #[non_exhaustive]
enum that classifies errors into portable categories, such as NotFound
or WouldBlock
. Rust code that has a std::io::Error
can call the kind
method to obtain a std::io::ErrorKind
and match on that to handle a specific error.
Not all errors are categorized into ErrorKind
values; some are left uncategorized and placed in a catch-all variant. In previous versions of Rust, uncategorized errors used ErrorKind::Other
; however, user-created std::io::Error
values also commonly used ErrorKind::Other
. In 1.55, uncategorized errors now use the internal variant ErrorKind::Uncategorized
, which we intend to leave hidden and never available for stable Rust code to name explicitly; this leaves ErrorKind::Other
exclusively for constructing std::io::Error
values that don't come from the standard library. This enforces the #[non_exhaustive]
nature of ErrorKind
.
Rust code should never match ErrorKind::Other
and expect any particular underlying error code; only match ErrorKind::Other
if you're catching a constructed std::io::Error
that uses that error kind. Rust code matching on std::io::Error
should always use _
for any error kinds it doesn't know about, in which case it can match the underlying error code, or report the error, or bubble it up to calling code.
We're making this change to smooth the way for introducing new ErrorKind variants in the future; those new variants will start out nightly-only, and only become stable later. This change ensures that code matching variants it doesn't know about must use a catch-all _
pattern, which will work both with ErrorKind::Uncategorized
and with future nightly-only variants.
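A sketch of the recommended matching style (the classify helper is hypothetical):

```rust
use std::io::{Error, ErrorKind};

// Hypothetical helper: match the kinds you know, and use `_` for
// everything else, since ErrorKind is #[non_exhaustive].
fn classify(err: &Error) -> &'static str {
    match err.kind() {
        ErrorKind::NotFound => "not found",
        ErrorKind::WouldBlock => "would block",
        _ => "something else",
    }
}

fn main() {
    let e = Error::new(ErrorKind::NotFound, "missing file");
    assert_eq!(classify(&e), "not found");

    // A user-constructed error using ErrorKind::Other falls through
    // to the catch-all arm here.
    let custom = Error::new(ErrorKind::Other, "app-specific failure");
    assert_eq!(classify(&custom), "something else");
}
```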
Rust 1.55 stabilized using open ranges in patterns:
match x as u32 {
0 => println!("zero!"),
1.. => println!("positive number!"),
}
Read more details here.
The following methods and trait implementations were stabilized.
Bound::cloned
Drain::as_str
IntoInnerError::into_error
IntoInnerError::into_parts
MaybeUninit::assume_init_mut
MaybeUninit::assume_init_ref
MaybeUninit::write
array::map
ops::ControlFlow
x86::_bittest
x86::_bittestandcomplement
x86::_bittestandreset
x86::_bittestandset
x86_64::_bittest64
x86_64::_bittestandcomplement64
x86_64::_bittestandreset64
x86_64::_bittestandset64
The following previously stable functions are now const.
There are other changes in the Rust 1.55.0 release: check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.55.0. We couldn't have done it without all of you. Thanks!
Anna Harren was a member of the community and contributor to Rust known for coining the term "Turbofish" to describe ::<>
syntax. Anna recently passed away after living with cancer. Her contribution will forever be remembered and be part of the language, and we dedicate this release to her memory.
Where to start, where to start...
Let's begin by saying: this is a very exciting post. Some people reading this will be overwhelmingly thrilled; some will have no idea what GATs (generic associated types) are; others might be in disbelief. The RFC for this feature was opened in April of 2016 (and merged about a year and a half later). In fact, this RFC even predates const generics (an MVP of which was recently stabilized). Don't let this fool you though: it is a powerful feature, and the reactions to the tracking issue on GitHub should give you an idea of its popularity (it is the most upvoted issue on the Rust repository):
If you're not familiar with GATs, they allow you to define type, lifetime, or const generics on associated types. Like so:
trait Foo {
type Bar<'a>;
}
Now, this may seem underwhelming, but I'll go into more detail later as to why this really is a powerful feature.
But for now: what exactly is happening? Well, nearly four years after its RFC was merged, the generic_associated_types
feature is no longer "incomplete."
crickets chirping
Wait...that's it?? Well, yes! I'll go into a bit of detail later in this blog post as to why this is a big deal. But, long story short, a good number of changes had to be made to the compiler to get GATs to work. And, while there are still a few small remaining diagnostics issues, the feature is finally in a space where we feel comfortable making it no longer "incomplete".
So, what does that mean? Well, all it really means is that when you use this feature on nightly, you'll no longer get the "generic_associated_types
is incomplete" warning. However, the real reason this is a big deal: we want to stabilize this feature. But we need your help. We need you to test this feature, to file issues for any bugs you find or for potential diagnostic improvements. Also, we'd love for you to just tell us about some interesting patterns that GATs enable over on Zulip!
Without making promises that we aren't 100% sure we can keep, we have high hopes we can stabilize this feature within the next couple months. But, we want to make sure we aren't missing glaringly obvious bugs or flaws. We want this to be a smooth stabilization.
Okay. Phew. That's the main point of this post and the most exciting news. But as I said before, I think it's also reasonable for me to explain what this feature is, what you can do with it, and some of the background and how we got here.
Note: this will only be a brief overview. The RFC contains many more details.
GATs (generic associated types) were originally proposed in RFC 1598. As said before, they allow you to define type, lifetime, or const generics on associated types. If you're familiar with languages that have "higher-kinded types", then you could call GATs type constructors on traits. Perhaps the easiest way for you to get a sense of how you might use GATs is to jump into an example.
Here is a popular use case: a LendingIterator
(formerly known as a StreamingIterator
):
trait LendingIterator {
type Item<'a> where Self: 'a;
fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}
Let's go through one implementation of this, a hypothetical <[T]>::windows_mut
, which allows for iterating through overlapping mutable windows on a slice. If you were to try to implement this with Iterator
today like
struct WindowsMut<'t, T> {
slice: &'t mut [T],
start: usize,
window_size: usize,
}
impl<'t, T> Iterator for WindowsMut<'t, T> {
type Item = &'t mut [T];
fn next<'a>(&'a mut self) -> Option<Self::Item> {
let retval = self.slice[self.start..].get_mut(..self.window_size)?;
self.start += 1;
Some(retval)
}
}
then you would get an error.
error[E0495]: cannot infer an appropriate lifetime for lifetime parameter in function call due to conflicting requirements
--> src/lib.rs:9:22
|
9 | let retval = self.slice[self.start..].get_mut(..self.window_size)?;
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
note: first, the lifetime cannot outlive the lifetime `'a` as defined on the method body at 8:13...
--> src/lib.rs:8:13
|
8 | fn next<'a>(&'a mut self) -> Option<Self::Item> {
| ^^
note: ...so that reference does not outlive borrowed content
--> src/lib.rs:9:22
|
9 | let retval = self.slice[self.start..].get_mut(..self.window_size)?;
| ^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime `'t` as defined on the impl at 6:6...
--> src/lib.rs:6:6
|
6 | impl<'t, T: 't> Iterator for WindowsMut<'t, T> {
| ^^
Put succinctly, this error is essentially telling us that in order for us to be able to return a reference to self.slice
, it must live as long as 'a
, which would require a 'a: 't
bound (which we can't provide). Without this, we could call next
while already holding a reference to the slice, creating overlapping mutable references. However, it does compile fine if you were to implement this using the LendingIterator
trait from before:
impl<'t, T> LendingIterator for WindowsMut<'t, T> {
type Item<'a> where Self: 'a = &'a mut [T];
fn next<'a>(&'a mut self) -> Option<Self::Item<'a>> {
let retval = self.slice[self.start..].get_mut(..self.window_size)?;
self.start += 1;
Some(retval)
}
}
As an aside, there's one thing to note about this trait and impl that you might be curious about: the where Self: 'a
clause on Item
. Briefly, this allows us to use &'a mut [T]
; without this where clause, someone could try to return Self::Item<'static>
and extend the lifetime of the slice. We understand that this is a point of confusion sometimes and are considering potential alternatives, such as always assuming this bound or implying it based on usage within the trait (see this issue). We definitely would love to hear about your use cases here, particularly when assuming this bound would be a hindrance.
As another example, imagine you wanted a struct to be generic over a pointer to a specific type. You might write the following code:
trait PointerFamily {
type Pointer<T>: Deref<Target = T>;
fn new<T>(value: T) -> Self::Pointer<T>;
}
struct ArcFamily;
struct RcFamily;
impl PointerFamily for ArcFamily {
type Pointer<T> = Arc<T>;
...
}
impl PointerFamily for RcFamily {
type Pointer<T> = Rc<T>;
...
}
struct MyStruct<P: PointerFamily> {
pointer: P::Pointer<String>,
}
We won't go in-depth on the details here, but this example is nice in that it not only highlights the use of types in GATs, but also shows that you can still use the trait bounds that you already can use on associated types.
These two examples only scratch the surface of the patterns that GATs support. If you find any that seem particularly interesting or clever, we would love to hear about them over on Zulip!
So what has caused us to have taken nearly four years to get to the point that we are now? Well, it's hard to put into words how much the existing trait solver has had to change and adapt; but, consider this: for a while, it was thought that to support GATs, we would have to transition rustc to use Chalk, a potential future trait solver that uses logical predicates to solve trait goals (though, while some progress has been made, it's still very experimental even now).
For reference, here are some various implementation additions and changes that have been made that have furthered GAT support in some way or another:
And to further emphasize the work above: many of these PRs are large and have considerable design work behind them. There are also several smaller PRs along the way. But, we made it. And I just want to congratulate everyone who's put effort into this one way or another. You rock.
Ok, so now comes the part that nobody likes hearing about: the limitations. Fortunately, in this case, there's really only one GAT limitation: traits with GATs are not object safe. This means you won't be able to do something like
fn takes_iter(_: &mut dyn for<'a> LendingIterator<Item<'a> = &'a i32>) {}
The biggest reason for this decision is that there's still a bit of design and implementation work to actually make this usable. And while this is a nice feature, adding it in the future would be a backward-compatible change. We feel that it's better to get most of GATs stabilized and then come back and tackle this later than to block GATs for even longer. Also, GATs without object safety are still very powerful, so we don't lose much by deferring this.
As was mentioned earlier in this post, there are still a couple remaining diagnostics issues. If you do find bugs though, please file issues!
The Rust team is happy to announce a new version of Rust, 1.54.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.54.0 is as easy as:
rustup update stable
If you don't have it already, you can get rustup
from the appropriate page on our website, and check out the detailed release notes for 1.54.0 on GitHub.
Rust 1.54 supports invoking function-like macros inside attributes. Function-like macros can be either macro_rules!
based or procedural macros which are invoked like macro!(...)
. One notable use case for this is including documentation from other files into Rust doc comments. For example, if your project's README represents a good documentation comment, you can use include_str!
to directly incorporate the contents. Previously, various workarounds allowed similar functionality, but from 1.54 this is much more ergonomic.
#![doc = include_str!("README.md")]
Macros can be nested inside the attribute as well, for example to include content generated by a build script:
#[path = concat!(env!("OUT_DIR"), "/generated.rs")]
mod generated;
Read here for more details.
A number of intrinsics for the wasm32 platform have been stabilized, which gives access to the SIMD instructions in WebAssembly.
Notably, unlike the previously stabilized x86
and x86_64
intrinsics, these do not have a safety requirement to only be called when the appropriate target feature is enabled. This is because WebAssembly was written from the start to validate code safely before executing it, so instructions are guaranteed to be decoded correctly (or not at all).
This means that we can expose some of the intrinsics as entirely safe functions, for example v128_bitselect
. However, there are still some intrinsics which are unsafe because they use raw pointers, such as v128_load
.
Incremental compilation has been re-enabled by default in this release, after being disabled by default in 1.52.1.
In Rust 1.52, additional validation was added when loading incremental compilation data from the on-disk cache. This resulted in a number of pre-existing potential soundness issues being uncovered as the validation changed these silent bugs into internal compiler errors (ICEs). In response, the Compiler Team decided to disable incremental compilation in the 1.52.1 patch, allowing users to avoid encountering the ICEs and the underlying unsoundness, at the expense of longer compile times. 1
Since then, we've conducted a series of retrospectives and contributors have been hard at work resolving the reported issues, with some fixes landing in 1.53 and the majority landing in this release. 2
There are currently still two known issues which can result in an ICE. Due to the lack of automated crash reporting, we can't be certain of the full extent of impact of the outstanding issues. However, based on the feedback we received from users affected by the 1.52 release, we believe the remaining issues to be rare in practice.
Therefore, incremental compilation has been re-enabled in this release!
The following methods and trait implementations were stabilized.
BTreeMap::into_keys
BTreeMap::into_values
HashMap::into_keys
HashMap::into_values
arch::wasm32
VecDeque::binary_search
VecDeque::binary_search_by
VecDeque::binary_search_by_key
VecDeque::partition_point
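A brief sketch of the newly stabilized VecDeque search methods (the sample data is arbitrary):

```rust
use std::collections::VecDeque;

fn main() {
    // binary_search requires the deque's contents to be sorted.
    let deque: VecDeque<i32> = VecDeque::from(vec![1, 3, 5, 7]);
    assert_eq!(deque.binary_search(&5), Ok(2));
    assert!(deque.binary_search(&4).is_err());

    // partition_point returns the index of the first element for
    // which the predicate is false.
    assert_eq!(deque.partition_point(|&x| x < 4), 2);
}
```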
There are other changes in the Rust 1.54.0 release: check out what changed in Rust, Cargo, and Clippy.
rustfmt has also been fixed in the 1.54.0 release to properly format nested out-of-line modules. This may cause changes in formatting to files that were being ignored by the 1.53.0 rustfmt. See details here.
Many people came together to create Rust 1.54.0. We couldn't have done it without all of you. Thanks!
We are happy to announce that the Rust 2021 edition is entering its public testing period. All of the planned features for the edition are now available on nightly builds along with migrations that should move your code from Rust 2018 to Rust 2021. If you'd like to learn more about the changes that are part of Rust 2021, check out the nightly version of the Edition Guide.
As we enter the public testing period, we are encouraging adventurous users to test migrating their crates over to Rust 2021. As always, we expect this to be a largely automated process. The steps to try out the Rust 2021 Edition are as follows (more detailed directions can be found here):
1. Install the most recent nightly: rustup update nightly.
2. Run cargo +nightly fix --edition.
3. Edit Cargo.toml and place cargo-features = ["edition2021"] at the top (above [package]), and change the edition field to say edition = "2021".
4. Run cargo +nightly check to verify it now works in the new edition.

Note that Rust 2021 is still unstable, so you can expect bugs and other changes! We recommend migrating your crates in a temporary copy of your code versus your main branch. If you do encounter problems, or find areas where quality could be improved (missing documentation, confusing error messages, etc.) please file an issue and tell us about it! Thank you!
We are targeting stabilization of all Rust 2021 features for Rust 1.56, which will be released on October 21st, 2021. Per the Rust train release model, that means all features and work must land on nightly by September 7th.
The Rust team is happy to announce a new version of Rust, 1.53.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.53.0 is as easy as:
rustup update stable
If you don't have it already, you can get rustup
from the appropriate page on our website, and check out the detailed release notes for 1.53.0 on GitHub.
This release contains several new language features and many new library features,
including the long-awaited IntoIterator
implementation for arrays.
See the detailed release notes to learn about other changes not covered by this post.
This is the first Rust release in which arrays implement the IntoIterator
trait.
This means you can now iterate over arrays by value:
for i in [1, 2, 3] {
..
}
Previously, this was only possible by reference, using &[1, 2, 3]
or [1, 2, 3].iter()
.
Similarly, you can now pass arrays to methods expecting a T: IntoIterator
:
let set = BTreeSet::from_iter([1, 2, 3]);
for (a, b) in some_iterator.chain([1]).zip([1, 2, 3]) {
..
}
This was not implemented before, due to backwards compatibility problems.
Because IntoIterator
was already implemented for references to arrays, array.into_iter()
already compiled in earlier versions,
resolving to (&array).into_iter()
.
As of this release, arrays implement IntoIterator
with a small workaround to avoid breaking code.
The compiler will continue to resolve array.into_iter()
to (&array).into_iter()
,
as if the trait implementation does not exist.
This only applies to the .into_iter()
method call syntax, and does not
affect any other syntax such as for e in [1, 2, 3]
, iter.zip([1, 2, 3])
or IntoIterator::into_iter([1, 2, 3])
, which all compile fine.
Since this special case for .into_iter()
is only required to avoid breaking existing code,
it is removed in the new edition, Rust 2021, which will be released later this year.
See the edition announcement for more information.
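For example (a small sketch), the by-value forms work on every edition starting with 1.53, while only the .into_iter() method call keeps its old by-reference meaning outside Rust 2021:

```rust
fn main() {
    let array = [1, 2, 3];

    // A `for` loop iterates the array by value on all editions:
    let mut sum = 0;
    for x in array {
        sum += x;
    }
    assert_eq!(sum, 6);

    // The fully qualified form also resolves to the by-value
    // implementation, bypassing the `.into_iter()` special case:
    let collected: Vec<i32> = IntoIterator::into_iter(array).collect();
    assert_eq!(collected, vec![1, 2, 3]);
}
```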
Pattern syntax has been extended to support |
nested anywhere in the pattern.
This enables you to write Some(1 | 2)
instead of Some(1) | Some(2)
.
match result {
Ok(Some(1 | 2)) => { .. }
Err(MyError { kind: FileNotFound | PermissionDenied, .. }) => { .. }
_ => { .. }
}
Identifiers can now contain non-ASCII characters. All valid identifier characters in Unicode as defined in UAX #31 can now be used. That includes characters from many different scripts and languages, but does not include emoji.
For example:
const BLÅHAJ: &str = "🦈";
struct 人 {
名字: String,
}
let α = 1;
The compiler will warn about potentially confusing situations involving different scripts. For example, using identifiers that look very similar will result in a warning.
warning: identifier pair considered confusable between `s` and `ѕ`
Cargo no longer assumes the default HEAD of git repositories is named master.
This means you no longer need to specify branch = "main" for git dependencies
from a repository where the default branch is called main.
As previously discussed on the blog post for version 1.52.1, incremental compilation has been turned off by default on the stable Rust release channel. The feature remains available on the beta and nightly release channels. For the 1.53.0 stable release, the method for reenabling incremental is unchanged from 1.52.1.
The following methods and trait implementations were stabilized.
array::from_ref
array::from_mut
AtomicBool::fetch_update
AtomicPtr::fetch_update
BTreeSet::retain
BTreeMap::retain
BufReader::seek_relative
cmp::min_by
cmp::min_by_key
cmp::max_by
cmp::max_by_key
DebugStruct::finish_non_exhaustive
Duration::ZERO
Duration::MAX
Duration::is_zero
Duration::saturating_add
Duration::saturating_sub
Duration::saturating_mul
f32::is_subnormal
f64::is_subnormal
IntoIterator for array
{integer}::BITS
io::Error::Unsupported
NonZero*::leading_zeros
NonZero*::trailing_zeros
Option::insert
Ordering::is_eq
Ordering::is_ne
Ordering::is_lt
Ordering::is_gt
Ordering::is_le
Ordering::is_ge
OsStr::make_ascii_lowercase
OsStr::make_ascii_uppercase
OsStr::to_ascii_lowercase
OsStr::to_ascii_uppercase
OsStr::is_ascii
OsStr::eq_ignore_ascii_case
Peekable::peek_mut
Rc::increment_strong_count
Rc::decrement_strong_count
slice::IterMut::as_slice
AsRef<[T]> for slice::IterMut
impl SliceIndex for (Bound<usize>, Bound<usize>)
Vec::extend_from_within
There are other changes in the Rust 1.53.0 release: check out what changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.53.0. We couldn't have done it without all of you. Thanks!
The rustup working group is happy to announce the release of rustup version 1.24.3. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.24.3 is as easy as closing your IDE and running:
rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
This patch release focuses on resolving some behavioural regressions in the 1.24.x series, either on lower-tier platforms or in unusual situations involving very old toolchains.
Full details are available in the changelog!
Rustup's documentation is also available in the rustup book.
Thanks again to all the contributors who made rustup 1.24.3 possible!
The rustup working group is happy to announce the release of rustup version 1.24.2. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.24.2 is as easy as closing your IDE and running:
rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
1.24.2 introduces pooled allocations to prevent memory fragmentation issues on some platforms with 1.24.x. We're not entirely sure what aspect of the streamed unpacking logic caused allocator fragmentation, but memory pools are a well known fix that should solve this for all platforms.
Those who were encountering CI issues with 1.24.1 should find them resolved.
You can check out all the changes to Rustup for 1.24.2 in the changelog!
Rustup's documentation is also available in the rustup book.
Finally, the Rustup working group are pleased to welcome a new member. Between 1.24.1 and 1.24.2 二手掉包工程师 (hi-rustin) has joined, having already made some excellent contributions.
Thanks again to all the contributors who made rustup 1.24.2 possible!
Today marks Rust's sixth birthday since it went 1.0 in 2015. A lot has changed since then, especially over the past year, and Rust was no exception. In 2020, there was no foundation yet, no const generics, and a lot of organisations were still wondering whether Rust was production ready.
In the midst of the COVID-19 pandemic, hundreds of Rust's global distributed set of team members and volunteers shipped over nine new stable releases of Rust, in addition to various bugfix releases. Today, "Rust in production" isn't a question, but a statement. The newly founded Rust foundation has several members who value using Rust in production enough to help continue to support and contribute to its open development ecosystem.
We wanted to take today to look back at some of the major improvements over the past year, how the community has been using Rust in production, and finally look ahead at some of the work that is currently ongoing to improve and use Rust for small and large scale projects over the next year. Let's get started!
The Rust language has improved tremendously in the past year, gaining a lot of quality-of-life features that, while they don't fundamentally change the language, make using and maintaining Rust in more places even easier.
Improvements to const fns, and allowing procedural macros to be used in more places, have enabled powerful new types of APIs and crates to be created. Rustc wasn't the only tool that had significant improvements.
Each year Rust's growth and adoption in the community and industry has been unbelievable, and this past year has been no exception. Once again in 2020, Rust was voted StackOverflow's Most Loved Programming Language. Thank you to everyone in the community for your support, and help making Rust what it is today.
With the formation of the Rust foundation, Rust has been in a better position to build a sustainable open source ecosystem empowering everyone to build reliable and efficient software. A number of companies that use Rust have formed teams dedicated to maintaining and improving the Rust project, including AWS, Facebook, and Microsoft.
And it isn't just Rust that has been getting bigger. Larger and larger companies have been adopting Rust in their projects and offering officially supported Rust APIs.
Of course, all that is just the start; we're seeing more and more initiatives putting Rust in exciting new places, such as rust-gpu, a new compiler backend that allows writing graphics shaders using Rust for GPUs.
Right now the Rust teams are planning and coordinating the 2021 edition of Rust. Much like this past year, a lot of the themes of the changes are around improving quality of life. You can check out our recent post about "The Plan for the Rust 2021 Edition" to see what changes the teams are planning.
And that's just the tip of the iceberg; there are a lot more changes being worked on, and exciting new open projects being started every day in Rust. We can't wait to see what you all build in the year ahead!
Are there changes, or projects from the past year that you're excited about? Are you looking to get started with Rust? Do you want to help contribute to the 2021 edition? Then come on over, introduce yourself, and join the discussion over on our Discourse forum and Zulip chat! Everyone is welcome, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, disability, ethnicity, religion, or similar personal characteristic.