The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
async fn and return-position impl Trait in traits

As announced last week, Rust 1.75 supports use of async fn and -> impl Trait in traits. However, this initial release comes with some limitations that are described in the announcement post. It's expected that these limitations will be lifted in future releases.
Raw pointers (*const T and *mut T) used to primarily support operations operating in units of T. For example, <*const T>::add(1) would add size_of::<T>() bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to *const u8/*mut u8 first.
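For illustration, here is a minimal sketch (the array and values are invented for the example) contrasting element offsets with the new byte-offset methods:

fn main() {
    let values: [u32; 4] = [10, 20, 30, 40];
    let p: *const u32 = values.as_ptr();

    // add(1) advances by one element, i.e. size_of::<u32>() = 4 bytes.
    let second = unsafe { *p.add(1) };

    // byte_add(4) advances by exactly 4 bytes, with no need to cast
    // through *const u8 and back.
    let also_second = unsafe { *p.byte_add(4) };

    assert_eq!(second, 20);
    assert_eq!(also_second, 20);
}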
The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall-time improvement on our benchmarks. This tool optimizes the layout of the librustc_driver.so library containing most of the rustc code, allowing for better cache utilization.
We are also now building rustc with -Ccodegen-units=1, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% mean wall-time win to our benchmarks.
In this release these optimizations are limited to x86_64-unknown-linux-gnu compilers, but we expect to expand that over time to include more platforms.
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!
The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of async fn in traits. Rust 1.75, which hits stable next week, will include support for both -> impl Trait notation and async fn in traits.
This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.
Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.
/// Given a list of players, return an iterator
/// over their names.
fn player_names(
players: &[Player]
) -> impl Iterator<Item = &String> {
players
.iter()
.map(|p| &p.name)
}
Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:
trait Container {
fn items(&self) -> impl Iterator<Item = Widget>;
}
impl Container for MyContainer {
fn items(&self) -> impl Iterator<Item = Widget> {
self.items.iter().cloned()
}
}
So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.
trait HttpService {
async fn fetch(&self, url: Url) -> HtmlBody;
// ^^^^^^^^ desugars to:
// fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}
-> impl Trait in public traits

The use of -> impl Trait is still discouraged for general use in public traits and APIs for the reason that users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the Container trait:
fn print_in_reverse(container: impl Container) {
for item in container.items().rev() {
// ERROR: ^^^
// the trait `DoubleEndedIterator`
// is not implemented for
// `impl Iterator<Item = Widget>`
eprintln!("{item}");
}
}
Even though some implementations might return an iterator that implements DoubleEndedIterator, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, -> impl Trait is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.1
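For comparison, a sketch of the associated-type alternative (reusing the Container and Widget names from the example above): because the iterator type is a nameable associated type, generic code can demand extra capabilities from it.

struct Widget;

trait Container {
    type Items: Iterator<Item = Widget>;
    fn items(&self) -> Self::Items;
}

// Generic code can now require the additional bound it needs:
fn print_in_reverse<C: Container>(container: C)
where
    C::Items: DoubleEndedIterator,
{
    for _item in container.items().rev() {
        // ... print the item ...
    }
}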
async fn in public traits

Since async fn desugars to -> impl Future, the same limitations apply. In fact, if you use bare async fn in a public trait today, you'll see a warning.
warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
--> src/lib.rs:7:5
|
7 | async fn fetch(&self, url: Url) -> HtmlBody;
| ^^^^^
|
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
|
7 - async fn fetch(&self, url: Url) -> HtmlBody;
7 + fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
|
Of particular interest to users of async are Send bounds on the returned future. Since users cannot add bounds later, the error message is saying that you as a trait author need to make a choice: Do you want your trait to work with multithreaded, work-stealing executors?
Thankfully, we have a solution that allows using async fn in public traits today! We recommend using the trait_variant::make proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with cargo add trait-variant, then use it like so:
#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
async fn fetch(&self, url: Url) -> HtmlBody;
}
This creates two versions of your trait: LocalHttpService for single-threaded executors and HttpService for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional Send bounds:
pub trait HttpService: Send {
fn fetch(
&self,
url: Url,
) -> impl Future<Output = HtmlBody> + Send;
}
This macro works for async because impl Future rarely requires additional bounds other than Send, so we can set our users up for success. See the FAQ below for an example of where this is needed.
Traits that use -> impl Trait and async fn are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the trait-variant crate.
In the future we would like to allow users to add their own bounds to impl Trait return types, which would make them more generally useful. It would also enable more advanced uses of async fn. The syntax might look something like this:
trait HttpService = LocalHttpService<fetch(): Send> + Send;
Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.
Of course, the goals of the Async Working Group don't stop with async fn in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.
Can I use -> impl Trait in traits?

For private traits you can use -> impl Trait freely. For public traits, it's best to avoid them for now unless you can anticipate all the bounds your users might want (in which case you can use #[trait_variant::make], as we do for async). We expect to lift this restriction in the future.
Should I still use the #[async_trait] macro?

There are a couple of reasons you might need to continue using async-trait:

As stated above, we hope to enable dynamic dispatch in a future version of the trait-variant crate.
Is it okay to use async fn in traits? What are the limitations?

Assuming you don't need to use #[async_trait] for one of the reasons stated above, it's totally fine to use regular async fn in traits. Just remember to use #[trait_variant::make] if you want to support multithreaded runtimes.
The biggest limitation is that a type must always decide if it implements the Send or non-Send version of a trait. It cannot implement the Send version conditionally on one of its generics. This can come up in the middleware pattern, for example, a RequestLimitingService<T> that is HttpService if T: HttpService.
Why do I need #[trait_variant::make] and Send bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:
fn spawn_task(service: impl HttpService + 'static) {
tokio::spawn(async move {
let url = Url::from("https://rust-lang.org");
let _body = service.fetch(url).await;
});
}
Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.
Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.
For a more thorough explanation of the problem, see this blog post.2
Can I mix async fn and impl Trait?

Yes, you can freely move between the async fn and -> impl Future spelling in your traits and impls. This is true even when one form has a Send bound.3 This makes the traits created by trait_variant nicer to use.
trait HttpService: Send {
fn fetch(&self, url: Url)
-> impl Future<Output = HtmlBody> + Send;
}
impl HttpService for MyService {
async fn fetch(&self, url: Url) -> HtmlBody {
// This works, as long as `do_fetch(): Send`!
self.client.do_fetch(url).await.into_body()
}
}
Why don't these signatures use impl Future + '_?

For -> impl Trait in traits we adopted the 2024 Capture Rules early. This means that the + '_ you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.
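A sketch of the difference (types invented for illustration):

struct Counter(Vec<u32>);

impl Counter {
    // In an inherent impl today (pre-2024 rules), the `+ '_` is needed
    // because the iterator borrows `self` but no lifetime appears in the
    // return type's bounds.
    fn values(&self) -> impl Iterator<Item = u32> + '_ {
        self.0.iter().copied()
    }
}

// In a trait, the return type is already assumed to capture input
// lifetimes, so no `+ '_` is required.
trait Values {
    fn values(&self) -> impl Iterator<Item = u32>;
}

impl Values for Counter {
    fn values(&self) -> impl Iterator<Item = u32> {
        self.0.iter().copied()
    }
}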
Why am I getting a "refine" warning when I implement a trait with -> impl Trait?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:
pub trait Foo {
fn foo(self) -> impl Debug;
}
impl Foo for u32 {
fn foo(self) -> String {
// ^^^^^^
// warning: impl trait in impl method signature does not match trait method signature
self.to_string()
}
}
The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?
fn main() {
// Did the implementer mean to allow
// use of `Display`, or only `Debug` as
// the trait says?
println!("{}", 32.foo());
}
Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)] on the impl.
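A sketch of confirming that intent on the example above:

#[allow(refining_impl_trait)]
impl Foo for u32 {
    fn foo(self) -> String {
        // No warning: the refinement to `String` is now explicit,
        // and callers may rely on it.
        self.to_string()
    }
}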
The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.
async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead. ↩

It's time for the 2023 State of Rust Survey!
Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.
Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.
We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.
Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:
Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.
This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!
If you have any questions, please see our frequently asked questions.
We appreciate your participation!
Click here to read a summary of last year's survey findings.
The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!
You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.
But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:
In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!
We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.
Please keep in mind that the following criteria determine the sort of changes we're looking for:
cargo fix), in order to make upgrading to a new Edition as painless as possible.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.
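A minimal sketch of the 2021-edition behavior described above:

fn main() {
    let array = [1u32, 2, 3];

    // 2015/2018 editions: method resolution found IntoIterator on &[T],
    // so this loop yielded &u32 references.
    // 2021 edition: the array's own IntoIterator is used, yielding u32.
    for item in array.into_iter() {
        let _owned: u32 = item; // compiles on the 2021 edition
    }
}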
Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)
Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.
We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.
Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory. This post includes:
In short, we are asking people who use the nightly channel to enable this feature and report any issues you encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml or %USERPROFILE%\.cargo\config.toml for Windows):
[unstable]
gc = true
Or set the CARGO_UNSTABLE_GC=true environment variable or use the -Zgc CLI flag to turn it on for individual commands.
We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.
Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.
This cache includes:

- .crate files downloaded from a registry.
- Extracted .crate files, which rustc uses to read the source and compile dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.
What isn't yet included is cleaning of target directories; see Plan for the future.
When you run cargo, once a day it will inspect the last-use cache tracker, and determine if any cache elements have not been used in a while. If they have not, then they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build or cargo fetch.
The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.
Automatic deletion is disabled if cargo is offline, such as with --offline or --frozen, to avoid deleting artifacts that may need to be used if you are offline for a long period of time.
The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.
If you want to manually delete data from the cache, several options have been added under the cargo clean gc subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days) or specifying the maximum size of the cache (such as --max-download-size=1GiB). See the Manual garbage collection section or run cargo clean gc --help for more details on which options are supported.
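For example, with the gc feature enabled, the options mentioned above can be combined with the subcommand like so (a sketch; the final CLI design may differ once stabilized):

# Perform the normal automatic cleaning on demand:
cargo clean gc

# Delete downloaded data that hasn't been used for 3 days:
cargo clean gc --max-download-age=3days

# Trim the download cache down to at most 1 GiB:
cargo clean gc --max-download-size=1GiB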
This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.
After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache.
After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.
The end result is that after that period of time you should start to notice the home directory using less space overall.
You can also try out the cargo clean gc command and explore some of its options if you want to try to manually delete some data.
If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.
We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:
Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.
(These sections are only for the intently curious among you.)
The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.
One big focus was to make sure that the performance of each invocation of cargo is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.
In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.
Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc, it previously did not hold a lock under the assumption that existing cache data will not be modified.
However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.
Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.
Because we do not want problems with GC from disrupting users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.
Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's particularly stable on-disk data format which has been stable since 2004. Cargo has support to do schema migrations within the database that stay backwards compatible.
A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.
One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.
Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.
Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate that has been built with what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.
We are looking to replace this system with SQLite which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use a substantial amount of disk space. Additionally we are looking to implement other improvements, such as more accurate fingerprint tracking, providing information about why cargo thinks something needed to be recompiled, and hopefully improving performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.
The Rust team has published a new point release of Rust, 1.74.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.74.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
1.74.1 resolves a few regressions introduced in 1.74.0:
Many people came together to create Rust 1.74.1. We couldn't have done it without all of you. Thanks!
The Rust team is happy to announce a new version of Rust, 1.74.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.74.0 with:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.74.0 on GitHub.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
As proposed in RFC 3389, the Cargo.toml manifest now supports a [lints] table to configure the reporting level (forbid, deny, warn, allow) for lints from the compiler and other tools. So rather than setting RUSTFLAGS with -F/-D/-W/-A, which would affect the entire build, or using crate-level attributes like:
#![forbid(unsafe_code)]
#![deny(clippy::enum_glob_use)]
You can now write those in your package manifest for Cargo to handle:
[lints.rust]
unsafe_code = "forbid"
[lints.clippy]
enum_glob_use = "deny"
These can also be configured in a [workspace.lints] table, then inherited by [lints] workspace = true like many other workspace settings. Cargo will also track changes to these settings when deciding which crates need to be rebuilt.
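For example, a sketch of the workspace form, reusing the lints from above (the two snippets live in different files):

# In the workspace root Cargo.toml:
[workspace.lints.rust]
unsafe_code = "forbid"

[workspace.lints.clippy]
enum_glob_use = "deny"

# In each member crate's Cargo.toml:
[lints]
workspace = true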
For more information, see the lints and workspace.lints sections of the Cargo reference manual.
Two more related Cargo features are included in this release: credential providers and authenticated private registries.
Credential providers allow configuration of how Cargo gets credentials for a registry. Built-in providers are included for OS-specific secure secret storage on Linux, macOS, and Windows. Additionally, custom providers can be written to support arbitrary methods of storing or generating tokens. Using a secure credential provider reduces risk of registry tokens leaking.
Registries can now optionally require authentication for all operations, not just publishing. This enables private Cargo registries to offer more secure hosting of crates. Use of private registries requires the configuration of a credential provider.
For further information, see the Cargo docs.
If you have ever received the error that a "return type cannot contain a projection or Self that references lifetimes from a parent scope," you may now rest easy! The compiler now allows mentioning Self and associated types in opaque return types, like async fn and -> impl Trait. This is the kind of feature that gets Rust closer to how you might just expect it to work, even if you have no idea about jargon like "projection".
This functionality had an unstable feature gate because its implementation originally didn't properly deal with captured lifetimes, and once that was fixed it was given time to make sure it was sound. For more technical details, see the stabilization pull request, which describes the following examples that are all now allowed:
struct Wrapper<'a, T>(&'a T);
// Opaque return types that mention `Self`:
impl Wrapper<'_, ()> {
async fn async_fn() -> Self { /* ... */ }
fn impl_trait() -> impl Iterator<Item = Self> { /* ... */ }
}
trait Trait<'a> {
type Assoc;
fn new() -> Self::Assoc;
}
impl Trait<'_> for () {
type Assoc = ();
fn new() {}
}
// Opaque return types that mention an associated type:
impl<'a, T: Trait<'a>> Wrapper<'a, T> {
async fn mk_assoc() -> T::Assoc { /* ... */ }
fn a_few_assocs() -> impl Iterator<Item = T::Assoc> { /* ... */ }
}
These APIs are now stable in const contexts:
Check out everything that changed in Rust, Cargo, and Clippy.
Many people came together to create Rust 1.74.0. We couldn't have done it without all of you. Thanks!
The Rust compiler's front-end can now use parallel execution to significantly reduce compile times. To try it, run the nightly compiler with the -Z threads=8 option. This feature is currently experimental, and we aim to ship it in the stable compiler in 2024.
Keep reading to learn why a parallel front-end is needed and how it works, or just skip ahead to the How to use it section.
Rust compile times are a perennial concern. The Compiler Performance Working Group has continually improved compiler performance for several years. For example, in the first 10 months of 2023, there were mean reductions in compile time of 13%, in peak memory use of 15%, and in binary size of 7%, as measured by our performance suite.
However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining.
But there is one piece of large but high-hanging fruit: parallelism. Current Rust compiler users benefit from two kinds of parallelism, and the newly parallel front-end adds a third kind.
When you compile a Rust program, Cargo launches multiple rustc processes, compiling multiple crates in parallel. This works well. Try compiling a large Rust program with the -j1 flag to disable this parallelization and it will take a lot longer than normal.
You can visualise this parallelism if you build with Cargo's --timings flag, which produces a chart showing how the crates are compiled. The following image shows the timeline when building ripgrep on a machine with 28 virtual cores.
There are 60 horizontal lines, each one representing a distinct process. Their durations range from a fraction of a second to multiple seconds. Most of them are rustc, and the few orange ones are build scripts. The first twenty processes all start at the same time. This is possible because there are no dependencies between the relevant crates. But further down the graph, parallelism reduces as crate dependencies increase. Although the compiler can overlap compilation of dependent crates somewhat thanks to a feature called pipelined compilation, there is much less parallel execution happening towards the end of compilation, and this is typical for large Rust programs. Interprocess parallelism is not enough to take full advantage of many cores. For more speed, we need parallelism within each process.
The compiler is split into two halves: the front-end and the back-end.
The front-end does many things, including parsing, type checking, and borrow checking. Until this week, it could not use parallel execution.
The back-end performs code generation. It generates code in chunks called "codegen units" and then LLVM processes these in parallel. This is a form of coarse-grained parallelism.
We can visualize the difference between the serial front-end and the parallel back-end. The following image shows the output of a profiler called Samply measuring rustc as it does a release build of the final crate in Cargo. The image is superimposed with markers that indicate front-end and back-end execution.
Each horizontal line represents a thread. The main thread is labelled "rustc" and is shown at the bottom. It is busy for most of the execution. The other 16 threads are LLVM threads, labelled "opt cgu.00" through to "opt cgu.15". There are 16 threads because 16 is the default number of codegen units for a release build.
There are several things worth noting.
The front-end is now capable of parallel execution. It uses Rayon to perform compilation tasks using fine-grained parallelism. Many data structures are synchronized by mutexes and read-write locks, atomic types are used where appropriate, and many front-end operations are made parallel. The addition of parallelism was done by modifying a relatively small number of key points in the code. The vast majority of the front-end code did not need to be changed.
When the parallel front-end is enabled and configured to use eight threads, we get the following Samply profile when compiling the same example as before.
Again, there are several things worth noting.
Rust compilation has long benefited from interprocess parallelism, via Cargo, and from intraprocess parallelism in the back-end. It can now also benefit from intraprocess parallelism in the front-end.
You might wonder how interprocess parallelism and intraprocess parallelism interact. If we have 20 parallel rustc invocations and each one can have up to 16 threads running, could we end up with hundreds of threads on a machine with only tens of cores, resulting in inefficient execution as the OS tries its best to schedule them?
Fortunately no. The compiler uses the jobserver protocol to limit the number of threads it creates. If a lot of interprocess parallelism is occurring, intraprocess parallelism will be limited appropriately, and the number of threads will not exceed the number of cores.
The nightly compiler is now shipping with the parallel front-end enabled. However, by default it runs in single-threaded mode and won't reduce compile times.
Keen users can opt into multi-threaded mode with the -Z threads option. For example:
$ RUSTFLAGS="-Z threads=8" cargo build --release
Alternatively, to opt in from a config.toml file (for one or more projects), add these lines:
[build]
rustflags = ["-Z", "threads=8"]
It may be surprising that single-threaded mode is the default. Why parallelize the front-end and then run it in single-threaded mode? The answer is simple: caution. This is a big change! The parallel front-end has a lot of new code. Single-threaded mode exercises most of the new code, but excludes the possibility of threading bugs such as deadlocks that can affect multi-threaded mode. Even in Rust, parallel programs are harder to write correctly than serial programs. For this reason the parallel front-end also won't be shipped in beta or stable releases for some time.
When the parallel front-end is run in single-threaded mode, compilation times are typically 0% to 2% slower than with the serial front-end. This should be barely noticeable.
When the parallel front-end is run in multi-threaded mode with -Z threads=8, our measurements on real-world code show that compile times can be reduced by up to 50%, though the effects vary widely and depend on the characteristics of the code and its build configuration. For example, dev builds are likely to see bigger improvements than release builds because release builds usually spend more time doing optimizations in the back-end. A small number of cases compile more slowly in multi-threaded mode than single-threaded mode. These are mostly tiny programs that already compile quickly.
We recommend eight threads because this is the configuration we have tested the most and it is known to give good results. Values lower than eight will see smaller benefits. Values greater than eight will give diminishing returns and may even give worse performance.
If a 50% improvement seems low when going from one to eight threads, recall from the explanation above that the front-end only accounts for part of compile times, and the back-end is already parallel. You can't beat Amdahl's Law.
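For illustration (with assumed, not measured, numbers): if the front-end accounted for two thirds of a build's wall time and scaled perfectly across eight threads, Amdahl's Law would cap the overall speedup at 1 / (1/3 + (2/3)/8) ≈ 2.4x, a roughly 58% reduction; any serial remainder lowers that ceiling further.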
Memory usage can increase significantly in multi-threaded mode. We have seen increases of up to 35%. This is unsurprising given that various parts of compilation, each of which requires a certain amount of memory, are now executing in parallel.
Reliability in single-threaded mode should be high.
In multi-threaded mode there are some known bugs, including deadlocks. If compilation hangs, you have probably hit one of them.
If you have any problems with the parallel front-end, please check the issues marked with the "WG-compiler-parallel" label. If your problem does not match any of the existing issues, please file a new issue.
For more general feedback, please start a discussion on the wg-parallel-rustc Zulip channel. We are particularly interested to hear the performance effects on the code you care about.
We are working to improve the performance of the parallel front-end. As the graphs above showed, there is room to improve the utilization of the threads in the front-end. We are also ironing out the remaining bugs in multi-threaded mode.
We aim to stabilize the -Z threads option and ship the parallel front-end running by default in multi-threaded mode on stable releases in 2024.
The parallel front-end has been under development for a long time. It was started by @Zoxc, who also did most of the work for several years. After a period of inactivity, the project was revived this year by @SparrowLii, who led the effort to get it shipped. Other members of the Parallel Rustc Working Group have also been involved with reviews and other activities. Many thanks to everyone involved.
The "non-canonical downloads" feature allows everyone to download the serde_derive
crate from https://crates.io/api/v1/crates/serde%5Fderive/1.0.189/download, but also from https://crates.io/api/v1/crates/SERDE-derive/1.0.189/download, where the underscore was replaced with a hyphen (crates.io normalizes underscores and hyphens to be the same for uniqueness purposes, so it isn't possible to publish a crate named serde-derive
because serde_derive
exists) and parts of the crate name are using uppercase characters. The same also works vice versa, if the canonical crate name uses hyphens and the download URL uses underscores instead. It even works with any other combination for crates that have multiple such characters (please don't mix them…!).
Supporting such non-canonical download requests means that the crates.io server needs to perform a database lookup for every download request to figure out the canonical crate name. The canonical crate name is then used to construct a download URL and the client is HTTP-redirected to that URL.
While we introduced a caching layer some time ago to address some of the performance concerns, having all download requests go through our backend servers has started to become problematic, and at the current rate of growth it will not become any easier in the future.
Having to support "non-canonical downloads" however prevents us from using CDNs directly for the download requests, so if we can remove support for non-canonical download requests, it will unlock significant performance and reliability gains.
cargo always uses the canonical crate name from the package index to construct the corresponding download URLs. If support was removed for this on the crates.io side then cargo would still work exactly the same as before.
Looking at the crates.io request logs, the following user-agents are currently relying on "non-canonical downloads" support:
Three of these are just generic HTTP client libraries. GNU Guile is apparently a programming language, so most likely this is also a generic user-agent from a custom user program.
cargo-binstall is a tool enabling installation of binary artifacts of crates. The maintainer is already aware of the upcoming change and confirmed that more recent versions of cargo-binstall should not be affected by this change.
We recommend that any scripts relying on non-canonical downloads be adjusted to use the canonical names from the package index, the database dump, or the crates.io API instead. If you don't know which data source is best suited for you, we welcome you to take a look at the crates.io data access page.
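For example, a script should request the canonical name exactly as it appears in the index (URL shape taken from the example above; the -L flag follows the redirect that crates.io issues):

curl -L "https://crates.io/api/v1/crates/serde_derive/1.0.189/download" \
    -o serde_derive-1.0.189.crate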
Note that we will still need the database query for download counting purposes for now. We have plans to remove this requirement as well, but those efforts are blocked by us still supporting non-canonical downloads.
If you want to follow the progress on implementing these changes or if you have comments you can subscribe to the corresponding tracking issue. Related discussions are also happening on the crates.io Zulip stream.