Rust Blog: Posts

Announcing Rust 1.76.0

The Rust team is happy to announce a new version of Rust, 1.76.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.76.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.76.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.76.0 stable

This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.

ABI compatibility updates

A new ABI Compatibility section in the function pointer documentation describes what it means for function signatures to be ABI-compatible. A large part of that is the compatibility of argument types and return types, with a list of those that are currently considered compatible in Rust. For the most part, this documentation is not adding any new guarantees, only describing the existing state of compatibility.

The one new addition is that it is now guaranteed that char and u32 are ABI compatible. They have always had the same size and alignment, but now they are considered equivalent even in function call ABI, consistent with the documentation above.
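
As a minimal illustration (a sketch, not taken from the release notes) of what this guarantee permits, a fn(u32) -> u32 can now be called through a fn(char) -> u32 pointer, since the two signatures are ABI-compatible:

fn code_point(c: u32) -> u32 {
    c
}

fn main() {
    // `char` and `u32` are guaranteed ABI-compatible, so transmuting between
    // these two function pointer types and calling the result is defined.
    let f: fn(char) -> u32 = unsafe { std::mem::transmute(code_point as fn(u32) -> u32) };
    assert_eq!(f('A'), 65);
}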

Type names from references

For debugging purposes, any::type_name::<T>() has been available since Rust 1.38 to return a string description of the type T, but that requires an explicit type parameter. It is not always easy to specify that type, especially for unnameable types like closures or for opaque return types. The new type_name_of_val(&T) offers a way to get a descriptive name from any reference to a type.

fn get_iter() -> impl Iterator<Item = i32> {
    [1, 2, 3].into_iter()
}

fn main() {
    let iter = get_iter();
    let iter_name = std::any::type_name_of_val(&iter);
    let sum: i32 = iter.sum();
    println!("The sum of the `{iter_name}` is {sum}.");
}

This currently prints:

The sum of the `core::array::iter::IntoIter<i32, 3>` is 6.

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.76.0

Many people came together to create Rust 1.76.0. We couldn't have done it without all of you. Thanks!

crates.io: API status code changes

Cargo and crates.io were developed in the rush leading up to the Rust 1.0 release to fill the needs for a tool to manage dependencies and a registry that people could use to share code. This rapid work resulted in these tools being connected with an API that initially didn't return the correct HTTP response status codes. After the Rust 1.0 release, Rust's stability guarantees around backward compatibility made this non-trivial to fix, as we wanted older versions of Cargo to continue working with the current crates.io API.

When an old version of Cargo receives a non-"200 OK" response, it displays the raw JSON body like this:

error: failed to get a 200 OK response, got 400
headers:
    HTTP/1.1 400 Bad Request
    Content-Type: application/json; charset=utf-8
    Content-Length: 171

body:
{"errors":[{"detail":"missing or empty metadata fields: description, license. Please see https://doc.rust-lang.org/cargo/reference/manifest.html for how to upload metadata"}]}

This was improved in pull request #6771, which was released in Cargo 1.34 (mid-2019). Since then, Cargo has supported receiving 4xx and 5xx status codes too and extracts the error message from the JSON response, if available.
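
As an illustration (a minimal sketch, not Cargo's actual implementation), extracting that error detail from the JSON body could look like this, assuming the serde (with its derive feature) and serde_json crates:

use serde::Deserialize;

#[derive(Deserialize)]
struct ApiErrors {
    errors: Vec<ApiError>,
}

#[derive(Deserialize)]
struct ApiError {
    detail: String,
}

fn extract_error_message(body: &str) -> Option<String> {
    // Parse the `{"errors":[{"detail": "..."}]}` shape and join the details.
    let parsed: ApiErrors = serde_json::from_str(body).ok()?;
    let details: Vec<String> = parsed.errors.into_iter().map(|e| e.detail).collect();
    if details.is_empty() {
        None
    } else {
        Some(details.join("\n"))
    }
}

fn main() {
    let body = r#"{"errors":[{"detail":"missing or empty metadata fields: description, license"}]}"#;
    assert!(extract_error_message(body).unwrap().starts_with("missing"));
}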

On 2024-03-04 we will switch the API from returning "200 OK" status codes for errors to the new 4xx/5xx behavior. Cargo 1.33 and below will keep working after this change, but will show the raw JSON body instead of a nicely formatted error message. We feel confident that this degraded error message display will not affect many users: according to the crates.io request logs, only a very small number of requests come from Cargo 1.33 and older versions.

This is the list of API endpoints that will be affected by this change:

  • GET /api/v1/crates
  • PUT /api/v1/crates/new
  • PUT /api/v1/crates/:crate/:version/yank
  • DELETE /api/v1/crates/:crate/:version/unyank
  • GET /api/v1/crates/:crate/owners
  • PUT /api/v1/crates/:crate/owners
  • DELETE /api/v1/crates/:crate/owners

All other endpoints have already been using regular HTTP status codes for some time.

If you are still using Cargo 1.33 or older, we recommend upgrading to a newer version to get the improved error messages and all the other nice things that the Cargo team has built since then.

Announcing Rust 1.75.0

The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.75.0 stable

async fn and return-position impl Trait in traits

As announced last week, Rust 1.75 supports use of async fn and -> impl Trait in traits. However, this initial release comes with some limitations that are described in the announcement post.

It's expected that these limitations will be lifted in future releases.

Pointer byte offset APIs

Raw pointers (*const T and *mut T) used to primarily support operations in units of T. For example, <*const T>::add(1) would add size_of::<T>() bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to *const u8/*mut u8 first.
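
For example, a short sketch of the byte-based methods in action, stepping through an array by bytes without an intermediate cast:

fn main() {
    let values: [u32; 4] = [10, 20, 30, 40];
    let start: *const u32 = values.as_ptr();

    // Advance by 8 bytes (two u32 elements) without casting to *const u8 first.
    let third = unsafe { start.byte_add(2 * std::mem::size_of::<u32>()) };
    assert_eq!(unsafe { *third }, 30);

    // The distance between the two pointers, measured in bytes.
    assert_eq!(unsafe { third.byte_offset_from(start) }, 8);
}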

Code layout optimizations for rustc

The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall-time improvement on our benchmarks. This tool optimizes the layout of the librustc_driver.so library containing most of the rustc code, allowing for better cache utilization.

We are also now building rustc with -Ccodegen-units=1, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% mean wall-time improvement on our benchmarks.

In this release these optimizations are limited to x86_64-unknown-linux-gnu compilers, but we expect to expand that over time to include more platforms.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.75.0

Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!

Announcing `async fn` and return-position `impl Trait` in traits

The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of async fn in traits. Rust 1.75, which hits stable next week, will include support for both -> impl Trait notation and async fn in traits.

This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.

What's stabilizing

Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.

/// Given a list of players, return an iterator
/// over their names.
fn player_names(
    players: &[Player]
) -> impl Iterator<Item = &String> {
    players
        .iter()
        .map(|p| &p.name)
}

Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:

trait Container {
    fn items(&self) -> impl Iterator<Item = Widget>;
}

impl Container for MyContainer {
    fn items(&self) -> impl Iterator<Item = Widget> {
        self.items.iter().cloned()
    }
}

So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.

trait HttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
//  ^^^^^^^^ desugars to:
//  fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}

Where the gaps lie

-> impl Trait in public traits

The use of -> impl Trait is still discouraged for general use in public traits and APIs because users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the Container trait:

fn print_in_reverse(container: impl Container) {
    for item in container.items().rev() {
        // ERROR:                 ^^^
        // the trait `DoubleEndedIterator`
        // is not implemented for
        // `impl Iterator<Item = Widget>`
        eprintln!("{item}");
    }
}

Even though some implementations might return an iterator that implements DoubleEndedIterator, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, -> impl Trait is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.1
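
For reference, a sketch of what the associated-type alternative might look like (the Widget type here is just a stand-in): naming the iterator type lets generic code require the extra capabilities it needs.

#[derive(Debug)]
struct Widget(u32);

trait Container {
    // The iterator is a nameable associated type...
    type ItemIter: Iterator<Item = Widget>;
    fn items(&self) -> Self::ItemIter;
}

// ...so callers can add the bound they need.
fn print_in_reverse<C: Container>(container: C)
where
    C::ItemIter: DoubleEndedIterator,
{
    for item in container.items().rev() {
        eprintln!("{item:?}");
    }
}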

async fn in public traits

Since async fn desugars to -> impl Future, the same limitations apply. In fact, if you use bare async fn in a public trait today, you'll see a warning.

warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
 --> src/lib.rs:7:5
  |
7 |     async fn fetch(&self, url: Url) -> HtmlBody;
  |     ^^^^^
  |
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
  |
7 -     async fn fetch(&self, url: Url) -> HtmlBody;
7 +     fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
  |

Of particular interest to users of async are Send bounds on the returned future. Since users cannot add bounds later, the error message is saying that you as a trait author need to make a choice: Do you want your trait to work with multithreaded, work-stealing executors?

Thankfully, we have a solution that allows using async fn in public traits today! We recommend using the trait_variant::make proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with cargo add trait-variant, then use it like so:

#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
}

This creates two versions of your trait: LocalHttpService for single-threaded executors and HttpService for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional Send bounds:

pub trait HttpService: Send {
    fn fetch(
        &self,
        url: Url,
    ) -> impl Future<Output = HtmlBody> + Send;
}

This macro works for async because impl Future rarely requires additional bounds other than Send, so we can set our users up for success. See the FAQ below for an example of where this is needed.

Dynamic dispatch

Traits that use -> impl Trait and async fn are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the trait-variant crate.

How we hope to improve in the future

In the future we would like to allow users to add their own bounds to impl Trait return types, which would make them more generally useful. It would also enable more advanced uses of async fn. The syntax might look something like this:

trait HttpService = LocalHttpService<fetch(): Send> + Send;

Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.

Of course, the goals of the Async Working Group don't stop with async fn in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.

Frequently asked questions

Is it okay to use -> impl Trait in traits?

For private traits you can use -> impl Trait freely. For public traits, it's best to avoid them for now unless you can anticipate all the bounds your users might want (in which case you can use #[trait_variant::make], as we do for async). We expect to lift this restriction in the future.

Should I still use the #[async_trait] macro?

There are a couple of reasons you might need to continue using async-trait:

  • You want to support Rust versions older than 1.75.
  • You want dynamic dispatch.

As stated above, we hope to enable dynamic dispatch in a future version of the trait-variant crate.

Is it okay to use async fn in traits? What are the limitations?

Assuming you don't need to use #[async_trait] for one of the reasons stated above, it's totally fine to use regular async fn in traits. Just remember to use #[trait_variant::make] if you want to support multithreaded runtimes.

The biggest limitation is that a type must always decide whether it implements the Send or the non-Send version of a trait; it cannot implement the Send version conditionally based on one of its generic parameters. This can come up in the middleware pattern, for example a RequestLimitingService<T> that is HttpService whenever T: HttpService.

Why do I need #[trait_variant::make] and Send bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:

fn spawn_task(service: impl HttpService + 'static) {
    tokio::spawn(async move {
        let url = Url::from("https://rust-lang.org");
        let _body = service.fetch(url).await;
    });
}

Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.

Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.

For a more thorough explanation of the problem, see this blog post.2

Can I mix async fn and impl Trait?

Yes, you can freely move between the async fn and -> impl Future spelling in your traits and impls. This is true even when one form has a Send bound.3 This makes the traits created by trait_variant nicer to use.

trait HttpService: Send {
    fn fetch(&self, url: Url)
    -> impl Future<Output = HtmlBody> + Send;
}

impl HttpService for MyService {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // This works, as long as `do_fetch(): Send`!
        self.client.do_fetch(url).await.into_body()
    }
}

Why don't these signatures use impl Future + '_?

For -> impl Trait in traits we adopted the 2024 Capture Rules early. This means that the + '_ you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.
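
A small illustrative sketch (not from the original post) of the difference: outside of traits the borrow currently has to be spelled out, while the trait form does not need it.

struct NumberStore {
    items: Vec<u32>,
}

impl NumberStore {
    // In an inherent method today, `+ '_` is needed because the return type
    // does not otherwise mention the borrow of `self`.
    fn doubled(&self) -> impl Iterator<Item = u32> + '_ {
        self.items.iter().map(|x| x * 2)
    }
}

trait Doubled {
    // In a trait, the opaque return type is already assumed to capture input
    // lifetimes, so no `+ '_` is required.
    fn doubled(&self) -> impl Iterator<Item = u32>;
}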

Why am I getting a "refine" warning when I implement a trait with -> impl Trait?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:

pub trait Foo {
    fn foo(self) -> impl Debug;
}

impl Foo for u32 {
    fn foo(self) -> String {
//                  ^^^^^^
//  warning: impl trait in impl method signature does not match trait method signature
        self.to_string()
    }
}

The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?

fn main() {
    // Did the implementer mean to allow
    // use of `Display`, or only `Debug` as
    // the trait says?
    println!("{}", 32.foo());
}

Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)] on the impl.

Conclusion

The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.

  1. Note that associated types can only be used in cases where the type is nameable. This restriction will be lifted once impl_trait_in_assoc_type is stabilized.
  2. Note that in that blog post we originally said we would solve the Send bound problem before shipping async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead.
  3. This works because of auto-trait leakage, which allows knowledge of auto traits to "leak" from an item whose signature does not specify them.

Launching the 2023 State of Rust Survey

It’s time for the 2023 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

A Call for Proposals for the Rust 2024 Edition

The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!

What is an Edition?

You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.

But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:

  1. Editions are opt-in; crates only receive breaking changes if their authors explicitly ask for them.
  2. Crates that use older editions never get left behind; a crate written for the original Rust 2015 Edition is still supported by every Rust release, and can still make use of all the new goodies that accompany each new version, e.g. new library APIs, compiler optimizations, etc.
  3. An Edition never splits the library ecosystem; crates using new Editions can depend on crates using old Editions (and vice-versa!), so nobody ever has to worry about Edition-related incompatibility.

In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!

A call for proposals for the Rust 2024 Edition

We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.

Please keep in mind that the following criteria determine the sort of changes we're looking for:

  1. A change must be possible to implement without violating the strict properties listed in the prior section. Specifically, the ability of crates to have cross-Edition dependencies imposes restrictions on changes that would take effect across crate boundaries, e.g. the signatures of public APIs. However, we will occasionally discover that an Edition-related change that was once thought to be impossible actually turns out to be feasible, so hope is not lost if you're not sure whether your idea meets this standard; propose it just to be safe!
  2. We strive to ensure that nearly all Edition-related changes can be applied to existing codebases automatically (via tools like cargo fix), in order to make upgrading to a new Edition as painless as possible.
  3. Even if an Edition could make any given change, that doesn't mean that it should. We're not looking for hugely-invasive changes or things that would fundamentally alter the character of the language. Please focus your proposals on things like fixing obvious bugs, changing annoying behavior, unblocking future feature development, and making the language easier and more consistent.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.
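
A small sketch of the difference (how the loop behaves depends on the crate's edition):

fn main() {
    let array = [String::from("a"), String::from("b")];
    // On the 2021 Edition and later, `into_iter` yields owned Strings, so this
    // compiles. On the 2015/2018 Editions the same call yielded &String
    // references, and this binding would not compile there.
    for s in array.into_iter() {
        let owned: String = s;
        println!("{owned}");
    }
}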

How to contribute

Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)

Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.

We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.

Cargo cache cleaning

Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory.

In short, we are asking people who use the nightly channel to enable this feature and report any issues you encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml or %USERPROFILE%\.cargo\config.toml for Windows):

[unstable]
gc = true

Alternatively, set the CARGO_UNSTABLE_GC=true environment variable or pass the -Zgc CLI flag to turn it on for individual commands.

We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.

What is this feature?

Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.

This cache includes:

  • Registry index data, such as package dependency metadata from crates.io.
  • Compressed .crate files downloaded from a registry.
  • The uncompressed contents of those .crate files, which rustc uses to read the source and compile dependencies.
  • Clones of git repositories used by git dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.
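
To make that idea concrete, here is a hedged sketch of such last-use tracking (a hypothetical schema using the rusqlite crate, not Cargo's actual implementation):

use rusqlite::{params, Connection};

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute(
        "CREATE TABLE last_use (path TEXT PRIMARY KEY, timestamp INTEGER NOT NULL)",
        [],
    )?;
    // Record (or refresh) the last time a cache entry was used.
    conn.execute(
        "INSERT INTO last_use (path, timestamp) VALUES (?1, ?2)
         ON CONFLICT(path) DO UPDATE SET timestamp = excluded.timestamp",
        params!["registry/cache/example.crate", 1_700_000_000_i64],
    )?;
    Ok(())
}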

What isn't yet included is cleaning of target directories; see "Plan for the future" below.

Automatic cleaning

When you run cargo, once a day it will inspect the last-use cache tracker, and determine if any cache elements have not been used in a while. If they have not, then they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build or cargo fetch.

The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.

Automatic deletion is disabled when cargo is offline, such as with --offline or --frozen, to avoid deleting artifacts that may need to be used if you are offline for a long period of time.

The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.

Manual cleaning

If you want to manually delete data from the cache, several options have been added under the cargo clean gc subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days) or specifying the maximum size of the cache (such as --max-download-size=1GiB). See the Manual garbage collection section or run cargo clean gc --help for more details on which options are supported.

This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.

What to watch out for

After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache.

After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.

The end result is that after that period of time you should start to notice the home directory using less space overall.

You can also try out the cargo clean gc command and explore some of its options if you want to try to manually delete some data.

If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.

Request for feedback

We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:

  • Have you run into any bugs, errors, issues, or confusing problems? Please file an issue over at https://github.com/rust-lang/cargo/issues/.
  • The first time that you use cargo with GC enabled, is there an unreasonably long delay? Cargo may need to scan your existing cache data once to detect what already exists from previous versions.
  • Do you notice unreasonable delays when it performs automatic cleaning once a day?
  • Do you have use cases where you need to do cleaning based on the size of the cache? If so, please share them at #13062.
  • If you think you would make use of manually deleting cache data, what are your use cases for doing that? Sharing them on #13060 about the CLI interface might help guide us on the overall design.
  • Does the default of deleting 3 month old data seem like a good balance for your use cases?

Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.

Design considerations and implementation details

(These sections are only for the intently curious among you.)

The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.

Performance

One big focus was to make sure that the performance of each invocation of cargo is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.

In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.

Locking

Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc, it previously did not hold a lock under the assumption that existing cache data will not be modified.

However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.

Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.

Error handling and filesystems

Because we do not want problems with GC from disrupting users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.

Backwards compatibility

Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's on-disk data format, which has been stable since 2004. Cargo has support to do schema migrations within the database that stay backwards compatible.

Plan for the future

A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.

Registry index metadata

One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.

Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.

Target directory change tracking and cleaning

Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate that has been built with what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.

We are looking to replace this system with SQLite, which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use a substantial amount of disk space. Additionally we are looking to implement other improvements, such as more accurate fingerprint tracking, providing information about why cargo thinks something needed to be recompiled, and hopefully improving performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.

Announcing Rust 1.74.1

The Rust team has published a new point release of Rust, 1.74.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.74.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.74.1

1.74.1 resolves a few regressions introduced in 1.74.0:

Contributors to 1.74.1

Many people came together to create Rust 1.74.1. We couldn't have done it without all of you. Thanks!

Announcing Rust 1.74.0

The Rust team is happy to announce a new version of Rust, 1.74.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.74.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.74.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.74.0 stable

Lint configuration through Cargo

As proposed in RFC 3389, the Cargo.toml manifest now supports a [lints] table to configure the reporting level (forbid, deny, warn, allow) for lints from the compiler and other tools. So rather than setting RUSTFLAGS with -F/-D/-W/-A, which would affect the entire build, or using crate-level attributes like:

#![forbid(unsafe_code)]
#![deny(clippy::enum_glob_use)]

You can now write those in your package manifest for Cargo to handle:

[lints.rust]
unsafe_code = "forbid"

[lints.clippy]
enum_glob_use = "deny"

These can also be configured in a [workspace.lints] table, then inherited by [lints] workspace = true like many other workspace settings. Cargo will also track changes to these settings when deciding which crates need to be rebuilt.

For more information, see the lints and workspace.lints sections of the Cargo reference manual.

Cargo Registry Authentication

Two more related Cargo features are included in this release: credential providers and authenticated private registries.

Credential providers allow configuration of how Cargo gets credentials for a registry. Built-in providers are included for OS-specific secure secret storage on Linux, macOS, and Windows. Additionally, custom providers can be written to support arbitrary methods of storing or generating tokens. Using a secure credential provider reduces risk of registry tokens leaking.

Registries can now optionally require authentication for all operations, not just publishing. This enables private Cargo registries to offer more secure hosting of crates. Use of private registries requires the configuration of a credential provider.

For further information, see the Cargo docs.

Projections in opaque return types

If you have ever received the error that a "return type cannot contain a projection or Self that references lifetimes from a parent scope," you may now rest easy! The compiler now allows mentioning Self and associated types in opaque return types, like async fn and -> impl Trait. This is the kind of feature that gets Rust closer to how you might just expect it to work, even if you have no idea about jargon like "projection".

This functionality had an unstable feature gate because its implementation originally didn't properly deal with captured lifetimes, and once that was fixed it was given time to make sure it was sound. For more technical details, see the stabilization pull request, which describes the following examples that are all now allowed:

struct Wrapper<'a, T>(&'a T);

// Opaque return types that mention `Self`:
impl Wrapper<'_, ()> {
    async fn async_fn() -> Self { /* ... */ }
    fn impl_trait() -> impl Iterator<Item = Self> { /* ... */ }
}

trait Trait<'a> {
    type Assoc;
    fn new() -> Self::Assoc;
}
impl Trait<'_> for () {
    type Assoc = ();
    fn new() {}
}

// Opaque return types that mention an associated type:
impl<'a, T: Trait<'a>> Wrapper<'a, T> {
    async fn mk_assoc() -> T::Assoc { /* ... */ }
    fn a_few_assocs() -> impl Iterator<Item = T::Assoc> { /* ... */ }
}

Stabilized APIs

These APIs are now stable in const contexts:

Compatibility notes

  • As previously announced, Rust 1.74 has increased its requirements on Apple platforms. The minimum versions are now:
    • macOS: 10.12 Sierra (First released 2016)
    • iOS: 10 (First released 2016)
    • tvOS: 10 (First released 2016)

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.74.0

Many people came together to create Rust 1.74.0. We couldn't have done it without all of you. Thanks!

Faster compilation with the parallel front-end in nightly

The Rust compiler's front-end can now use parallel execution to significantly reduce compile times. To try it, run the nightly compiler with the -Z threads=8 option. This feature is currently experimental, and we aim to ship it in the stable compiler in 2024.

Keep reading to learn why a parallel front-end is needed and how it works, or just skip ahead to the How to use it section.

Compile times and parallelism

Rust compile times are a perennial concern. The Compiler Performance Working Group has continually improved compiler performance for several years. For example, in the first 10 months of 2023, there were mean reductions in compile time of 13%, in peak memory use of 15%, and in binary size of 7%, as measured by our performance suite.

However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining.

But there is one piece of large but high-hanging fruit: parallelism. Current Rust compiler users benefit from two kinds of parallelism, and the newly parallel front-end adds a third kind.

Existing interprocess parallelism

When you compile a Rust program, Cargo launches multiple rustc processes, compiling multiple crates in parallel. This works well. Try compiling a large Rust program with the -j1 flag to disable this parallelization and it will take a lot longer than normal.

You can visualise this parallelism if you build with Cargo's --timings flag, which produces a chart showing how the crates are compiled. The following image shows the timeline when building ripgrep on a machine with 28 virtual cores.

cargo build --timings output when compiling ripgrep

There are 60 horizontal lines, each one representing a distinct process. Their durations range from a fraction of a second to multiple seconds. Most of them are rustc, and the few orange ones are build scripts. The first twenty processes all start at the same time. This is possible because there are no dependencies between the relevant crates. But further down the graph, parallelism reduces as crate dependencies increase. Although the compiler can overlap compilation of dependent crates somewhat thanks to a feature called pipelined compilation, there is much less parallel execution happening towards the end of compilation, and this is typical for large Rust programs. Interprocess parallelism is not enough to take full advantage of many cores. For more speed, we need parallelism within each process.

Existing intraprocess parallelism: the back-end

The compiler is split into two halves: the front-end and the back-end.

The front-end does many things, including parsing, type checking, and borrow checking. Until this week, it could not use parallel execution.

The back-end performs code generation. It generates code in chunks called "codegen units" and then LLVM processes these in parallel. This is a form of coarse-grained parallelism.

We can visualize the difference between the serial front-end and the parallel back-end. The following image shows the output of a profiler called Samply measuring rustc as it does a release build of the final crate in Cargo. The image is superimposed with markers that indicate front-end and back-end execution.

Samply output when compiling Cargo, serial

Each horizontal line represents a thread. The main thread is labelled "rustc" and is shown at the bottom. It is busy for most of the execution. The other 16 threads are LLVM threads, labelled "opt cgu.00" through to "opt cgu.15". There are 16 threads because 16 is the default number of codegen units for a release build.

There are several things worth noting.

  • Front-end execution takes 10.2 seconds.
  • Back-end execution takes 6.2 seconds, and the LLVM threads are running for 5.9 seconds of that.
  • The parallel code generation is highly effective. Imagine if all those LLVM threads executed one after another!
  • Even though there are 16 LLVM threads, at no point are all 16 executing at the same time, despite this being run on a machine with 28 cores. (The peak is 14 or 15.) This is because the main thread translates its internal code representation (MIR) to LLVM's code representation (LLVM IR) in serial. This takes a brief period for each codegen unit, and explains the staircase shape on the left-hand side of the code generation threads. There is some room for improvement here.
  • The front-end is entirely serial. There is a lot of room for improvement here.

New intraprocess parallelism: the front-end

The front-end is now capable of parallel execution. It uses Rayon to perform compilation tasks using fine-grained parallelism. Many data structures are synchronized by mutexes and read-write locks, atomic types are used where appropriate, and many front-end operations are made parallel. The addition of parallelism was done by modifying a relatively small number of key points in the code. The vast majority of the front-end code did not need to be changed.
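
As a generic illustration of the kind of fine-grained, work-stealing parallelism Rayon provides (this is not rustc's code, just a sketch):

use rayon::prelude::*;

fn main() {
    // Independent work items (in rustc, compilation tasks; here, plain numbers)
    // are distributed across a thread pool by Rayon.
    let inputs: Vec<u64> = (1..=1_000).collect();
    let total: u64 = inputs.par_iter().map(|n| n * n).sum();
    println!("sum of squares = {total}");
}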

When the parallel front-end is enabled and configured to use eight threads, we get the following Samply profile when compiling the same example as before.

Samply output when compiling Cargo, parallel

Again, there are several things worth noting.

  • Front-end execution takes 5.9 seconds (down from 10.2 seconds).
  • Back-end execution takes 5.3 seconds (down from 6.2 seconds), and the LLVM threads are running for 4.9 seconds of that (down from 5.9 seconds).
  • There are seven additional threads labelled "rustc" operating in the front-end. The reduced front-end time shows they are reasonably effective, but the thread utilization is patchy, with the eight threads all having periods of inactivity. There is room for significant improvement here.
  • Eight of the LLVM threads start at the same time. This is because the eight "rustc" threads create the LLVM IR for eight codegen units in parallel. (For seven of those threads that is the only work they do in the back-end.) After that, the staircase effect returns because only one "rustc" thread does LLVM IR generation while seven or more LLVM threads are active. If the number of threads used by the front-end was changed to 16 the staircase shape would disappear entirely, though in this case the final execution time would barely change.

Putting it all together

Rust compilation has long benefited from interprocess parallelism, via Cargo, and from intraprocess parallelism in the back-end. It can now also benefit from intraprocess parallelism in the front-end.

You might wonder how interprocess parallelism and intraprocess parallelism interact. If we have 20 parallel rustc invocations and each one can have up to 16 threads running, could we end up with hundreds of threads on a machine with only tens of cores, resulting in inefficient execution as the OS tries its best to schedule them?

Fortunately no. The compiler uses the jobserver protocol to limit the number of threads it creates. If a lot of interprocess parallelism is occurring, intraprocess parallelism will be limited appropriately, and the number of threads will not exceed the number of cores.

How to use it

The nightly compiler is now shipping with the parallel front-end enabled. However, by default it runs in single-threaded mode and won't reduce compile times.

Keen users can opt into multi-threaded mode with the -Z threads option. For example:

$ RUSTFLAGS="-Z threads=8" cargo build --release

Alternatively, to opt in from a config.toml file (for one or more projects), add these lines:

[build]
rustflags = ["-Z", "threads=8"]

It may be surprising that single-threaded mode is the default. Why parallelize the front-end and then run it in single-threaded mode? The answer is simple: caution. This is a big change! The parallel front-end has a lot of new code. Single-threaded mode exercises most of the new code, but excludes the possibility of threading bugs such as deadlocks that can affect multi-threaded mode. Even in Rust, parallel programs are harder to write correctly than serial programs. For this reason the parallel front-end also won't be shipped in beta or stable releases for some time.

Performance effects

When the parallel front-end is run in single-threaded mode, compilation times are typically 0% to 2% slower than with the serial front-end. This should be barely noticeable.

When the parallel front-end is run in multi-threaded mode with -Z threads=8, our measurements on real-world code show that compile times can be reduced by up to 50%, though the effects vary widely and depend on the characteristics of the code and its build configuration. For example, dev builds are likely to see bigger improvements than release builds because release builds usually spend more time doing optimizations in the back-end. A small number of cases compile more slowly in multi-threaded mode than single-threaded mode. These are mostly tiny programs that already compile quickly.

We recommend eight threads because this is the configuration we have tested the most and it is known to give good results. Values lower than eight will see smaller benefits. Values greater than eight will give diminishing returns and may even give worse performance.

If a 50% improvement seems low when going from one to eight threads, recall from the explanation above that the front-end only accounts for part of compile times, and the back-end is already parallel. You can't beat Amdahl's Law.
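
As a rough, back-of-the-envelope illustration using the profiles above: the serial build spent about 10.2 of roughly 16.4 seconds in the front-end, so even an infinitely fast front-end could remove at most around 60% of that build's time, and the measured drop to 5.9 seconds of front-end work corresponds to an overall saving of roughly a quarter.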

Memory usage can increase significantly in multi-threaded mode. We have seen increases of up to 35%. This is unsurprising given that various parts of compilation, each of which requires a certain amount of memory, are now executing in parallel.

Correctness

Reliability in single-threaded mode should be high.

In multi-threaded mode there are some known bugs, including deadlocks. If compilation hangs, you have probably hit one of them.

Feedback

If you have any problems with the parallel front-end, please check the issues marked with the "WG-compiler-parallel" label. If your problem does not match any of the existing issues, please file a new issue.

For more general feedback, please start a discussion on the wg-parallel-rustc Zulip channel. We are particularly interested to hear the performance effects on the code you care about.

Future work

We are working to improve the performance of the parallel front-end. As the graphs above showed, there is room to improve the utilization of the threads in the front-end. We are also ironing out the remaining bugs in multi-threaded mode.

We aim to stabilize the -Z threads option and ship the parallel front-end running by default in multi-threaded mode on stable releases in 2024.

Acknowledgments

The parallel front-end has been under development for a long time. It was started by @Zoxc, who also did most of the work for several years. After a period of inactivity, the project was revived this year by @SparrowLii, who led the effort to get it shipped. Other members of the Parallel Rustc Working Group have also been involved with reviews and other activities. Many thanks to everyone involved.

Continue Reading…

Rust Blog

crates.io: Dropping support for non-canonical downloads

TL;DR

  • We want to improve the reliability and performance of crate downloads.
  • "Non-canonical downloads" (that use URLs containing hyphens or underscores where the crate published uses the opposite) are blocking these plans.
  • On 2023-11-20 support for "non-canonical downloads" will be disabled.
  • cargo users are unaffected.

What are "non-canonical downloads"?

The "non-canonical downloads" feature allows everyone to download the serde_derive crate from https://crates.io/api/v1/crates/serde%5Fderive/1.0.189/download, but also from https://crates.io/api/v1/crates/SERDE-derive/1.0.189/download, where the underscore was replaced with a hyphen (crates.io normalizes underscores and hyphens to be the same for uniqueness purposes, so it isn't possible to publish a crate named serde-derive because serde_derive exists) and parts of the crate name are using uppercase characters. The same also works vice versa, if the canonical crate name uses hyphens and the download URL uses underscores instead. It even works with any other combination for crates that have multiple such characters (please don't mix them…!).

Why remove it?

Supporting such non-canonical download requests means that the crates.io server needs to perform a database lookup for every download request to figure out the canonical crate name. The canonical crate name is then used to construct a download URL and the client is HTTP-redirected to that URL.

While we introduced a caching layer some time ago to address some of the performance concerns, having all download requests go through our backend servers has become problematic, and at the current rate of growth it will not get any easier in the future.

Having to support "non-canonical downloads", however, prevents us from serving download requests directly from CDNs, so removing this support will unlock significant performance and reliability gains.

Who is using "non-canonical downloads"?

cargo always uses the canonical crate name from the package index to construct the corresponding download URLs. If support for this were removed on the crates.io side, cargo would still work exactly the same as before.

Looking at the crates.io request logs, the following user-agents are currently relying on "non-canonical downloads" support:

  • cargo-binstall/1.1.2
  • Faraday v0.17.6
  • Go-http-client/2.0
  • GNU Guile
  • python-requests/2.31.0

Three of these are just generic HTTP client libraries. GNU Guile is apparently a programming language, so most likely this is also a generic user-agent from a custom user program.

cargo-binstall is a tool enabling installation of binary artifacts of crates. The maintainer is already aware of the upcoming change and confirmed that more recent versions of cargo-binstall should not be affected by this change.

We recommend that any scripts relying on non-canonical downloads be adjusted to use the canonical names from the package index, the database dump, or the crates.io API instead. If you don't know which data source is best suited for you, we welcome you to take a look at the crates.io data access page.
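
As a sketch of what such a migration can look like, a script takes the exact crate name as recorded in the package index (or returned by the crates.io API) and builds the canonical download URL directly, using the URL shape shown earlier in this post:

// Build the canonical download URL for a crate. `name` must be the exact
// published name (e.g. "serde_derive"), taken from the package index, the
// database dump, or the crates.io API, not a normalized variant of it.
fn canonical_download_url(name: &str, version: &str) -> String {
    format!("https://crates.io/api/v1/crates/{name}/{version}/download")
}

fn main() {
    println!("{}", canonical_download_url("serde_derive", "1.0.189"));
}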

What is the plan?

  1. Today: Announce the removal of support for non-canonical downloads on the main Rust blog.
  2. 2023-11-20: Disable support for non-canonical downloads and return a migration error message instead, to alert remaining users of this feature of the need to migrate. This still puts some load on the application, since it must detect that a request is using a non-canonical download URL.
  3. 2023-12-18: Return a regular 404 error instead of the migration error message, allowing us to get rid of (parts of) the database query.

Note that we will still need the database query for download counting purposes for now. We have plans to remove this requirement as well, but those efforts are blocked by us still supporting non-canonical downloads.

If you want to follow the progress on implementing these changes or if you have comments you can subscribe to the corresponding tracking issue. Related discussions are also happening on the crates.io Zulip stream.

Continue Reading…

Rust Blog

A tale of broken badges and 23,000 features

Around mid-October of 2023 the crates.io team was notified by one of our users that a shields.io badge for their crate stopped working. The issue reporter was kind enough to already debug the problem and figured out that the API request that shields.io sends to crates.io was most likely the problem. Here is a quote from the original issue:

This crate makes heavy use of feature flags which bloat the response payload of the API.

Apparently the API response for this specific crate had broken the 20 MB mark and shields.io wasn't particularly happy with this. Interestingly, this crate only had 9 versions published at this point in time. But how do you get to 20 MB with only 9 published versions?

As the quote above already mentions, this crate is using features… a lot of features… almost 23,000! 😱

What crate needs that many features? Well, this crate provides SVG icons for Rust-based web applications… and it uses one feature per icon so that the payload size of the final WebAssembly bundle stays small.
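
As a hypothetical sketch of this pattern (the crate layout and feature names here are invented for illustration), each icon sits behind its own Cargo feature so that only the requested icons are compiled in:

// Cargo.toml (excerpt):
//
// [features]
// icon-arrow = []
// icon-house = []
// ...one empty feature per icon...

// lib.rs: each icon module is gated on its feature, so icons that are not
// enabled never end up in the final WebAssembly bundle.
#[cfg(feature = "icon-arrow")]
pub mod icon_arrow {
    pub const SVG: &str = "<svg><!-- arrow path data --></svg>";
}

#[cfg(feature = "icon-house")]
pub mod icon_house {
    pub const SVG: &str = "<svg><!-- house path data --></svg>";
}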

At first glance there should be nothing wrong with this. This seems like a reasonable thing to do from a crate author perspective and neither cargo, nor crates.io, were showing any warnings about this. Unfortunately, some of the internals are not too happy about such a high number of features…

The first problem that was already identified by the crate author: the API responses from crates.io are getting veeeery large. Adding to the problem is the fact that the crates.io API currently does not paginate the list of published versions. Changing this is obviously a breaking change, so our team had been a bit reluctant to change the behavior of the API in that regard, though this situation has shown that we will likely have to tackle this problem in the near future.

The next problem is that the index file for this crate is also getting large. With 9 published versions it already contains 11 MB of data. And just like the crates.io API, there is currently no pagination built into the package index file format.

Now you may ask, why do the package index and cargo need to know about features? Well, the easy answer is: for dependency resolution. Features can enable optional dependencies, so when a dependency feature is used it might influence the dependency resolution. Our initial thought was that we could at least drop all empty feature declarations from the index file (e.g. foo = []), but the cargo team informed us that cargo relies on them being available there too, and so for backwards-compatibility reasons this is not an option.

On the bright side, most Rust users these days are on cargo versions that use the sparse package index by default, which only downloads index files for packages actually being used. In other words: only users of this icon crate need to pay the price for downloading all the metadata. On the flip side, this means users who are still using the git-based index are all paying for this one crate using 23,000 features.

So, where do we go from here? 🤔

While we believe that supporting such a high number of features is conceptually a valid request, with the current implementation details in crates.io and cargo we cannot support this. After analyzing all of these downstream effects from a single crate having that many features, we realized we need some form of restriction on crates.io to keep the system from falling apart.

Now comes the important part: on 2023-10-16 the crates.io team deployed a change limiting the number of features a crate can have to 300 for any new crates/versions being published.

… for now, or at least until we have found solutions for the above problems.

We are aware of a couple of crates that also have legitimate reasons for having more than 300 features, and we have granted them appropriate exceptions to this rule, but we would like to ask everyone to be mindful of these limitations of our current systems.

We also invite everyone to participate in finding solutions to the above problems. The best place to discuss ideas is the crates.io Zulip stream, and once an idea is a bit more fleshed out it will then be transformed into an RFC.

Finally, we would like to thank Charles Edward Gagnon for making us aware of this problem. We also want to reiterate that the author and their crate are not to blame for this. It is hard to know of these crates.io implementation details when developing crates, so if anything, the blame would be on us, the crates.io team, for not having limits on this earlier. Anyway, we have them now, and now you all know why! 👋

Continue Reading…

Rust Blog

Announcing the New Rust Project Directors

We are happy to announce that we have completed the process to elect new Project Directors.

The new Project Directors are:

They will join Ryan Levick and Mark Rousskov to make up the five members of the Rust Foundation Board of Directors who represent the Rust Project.

The board is made up of Project Directors, who come from and represent the Rust Project, and Member Directors, who represent the corporate members of the Rust Foundation.

Both of these director groups have equal voting power.

We look forward to working with and being represented by this new group of project directors.

We were fortunate to have a number of excellent candidates and this was a difficult decision. We wish to express our gratitude to all of the candidates who were considered for this role! We also extend our thanks to the project as a whole who participated by nominating candidates and providing additional feedback once the nominees were published. Finally, we want to share our appreciation for the Project Director Elections Subcommittee for working to design and facilitate running this election process.

This was a challenging decision for a number of reasons.

This was also our first time running this process, and we learned a lot that we will use to improve it going forward. The Project Director Elections Subcommittee will be following up with a retrospective outlining how well we achieved our goals with this process and making suggestions for future elections. We are expecting another election next year to start a rotating cadence of 2-year terms. Project governance is about iterating and refining over time.

Once again, we thank all who were involved in this process and we are excited to welcome our new Project Directors.

Continue Reading…

Rust Blog

Announcing Rust 1.73.0

The Rust team is happy to announce a new version of Rust, 1.73.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.73.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.73.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.73.0 stable

Cleaner panic messages

The output produced by the default panic handler has been changed to put the panic message on its own line instead of wrapping it in quotes. This can make panic messages easier to read, as shown in this example:

fn main() {
    let file = "ferris.txt";
    panic!("oh no! {file:?} not found!");
}

Output before Rust 1.73:

thread 'main' panicked at 'oh no! "ferris.txt" not found!', src/main.rs:3:5

Output starting in Rust 1.73:

thread 'main' panicked at src/main.rs:3:5:
oh no! "ferris.txt" not found!

This is especially useful when the message is long, contains nested quotes, or spans multiple lines.

Additionally, the panic messages produced by assert_eq and assert_ne have been modified, moving the custom message (the third argument) and removing some unnecessary punctuation, as shown below:

fn main() {
    assert_eq!("🦀", "🐟", "ferris is not a fish");
}

Output before Rust 1.73:

thread 'main' panicked at 'assertion failed: `(left == right)`
 left: `"🦀"`,
right: `"🐟"`: ferris is not a fish', src/main.rs:2:5

Output starting in Rust 1.73:

thread 'main' panicked at src/main.rs:2:5:
assertion `left == right` failed: ferris is not a fish
 left: "🦀"
right: "🐟"

Thread local initialization

As proposed in RFC 3184, LocalKey<Cell<T>> and LocalKey<RefCell<T>> can now be directly manipulated with get(), set(), take(), and replace() methods, rather than jumping through a with(|inner| ...) closure as needed for general LocalKey work. LocalKey<T> is the type of thread_local! statics.

The new methods make common code more concise and avoid running the extra initialization code for the default value specified in thread_local! for new threads.

use std::cell::Cell;

thread_local! {
    static THINGS: Cell<Vec<i32>> = Cell::new(Vec::new());
}

fn f() {
    // before:
    THINGS.with(|i| i.set(vec![1, 2, 3]));
    // now:
    THINGS.set(vec![1, 2, 3]);

    // ...

    // before:
    let v = THINGS.with(|i| i.take());
    // now:
    let v: Vec<i32> = THINGS.take();
}

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.73.0

Many people came together to create Rust 1.73.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Increasing the minimum supported Apple platform versions

As of Rust 1.74 (to be released on November 16th, 2023), the minimum version of Apple's platforms (iOS, macOS, and tvOS) that the Rust toolchain supports will be increased to newer minimums. These changes affect the Rust compiler itself (rustc), other host tooling, and, most importantly, the standard library and any binaries produced that use it. With these changes in place, any binaries produced will stop loading on older versions or exhibit other, unspecified, behavior.

The new minimum versions are now:

  • macOS: 10.12 Sierra (First released 2016)
  • iOS: 10 (First released 2016)
  • tvOS: 10 (First released 2016)

If your application does not already target or support macOS 10.7-10.11 or iOS 7-9, these changes most likely do not affect you.

Affected targets

The following contains each affected target, and the comprehensive effects on it:

  • x86_64-apple-darwin (Minimum OS raised)
  • aarch64-apple-ios (Minimum OS raised)
  • aarch64-apple-ios-sim (Minimum iOS and macOS version raised.)
  • x86_64-apple-ios (Minimum iOS and macOS version raised. This is also a simulator target.)
  • aarch64-apple-tvos (Minimum OS raised)
  • armv7-apple-ios (Target removed. The oldest iOS 10-compatible device uses ARMv7s.)
  • armv7s-apple-ios (Minimum OS raised)
  • i386-apple-ios (Minimum OS raised)
  • i686-apple-darwin (Minimum OS raised)
  • x86_64-apple-tvos (Minimum tvOS and macOS version raised. This is also a simulator target.)

From these changes, only one target has been removed entirely: armv7-apple-ios. It was a tier 3 target.

Note that Mac Catalyst and M1/M2 (aarch64) Mac targets are not affected, as their minimum OS version already has a higher baseline. Refer to the Platform Support Guide for more information.

Affected systems

These changes remove support for multiple older mobile devices (iDevices) and many more Mac systems. Thanks to @madsmtm for compiling the list.

As of this update, the following device models are no longer supported by the latest Rust toolchain:

iOS

  • iPhone 4S (Released in 2011)
  • iPad 2 (Released in 2011)
  • iPad, 3rd generation (Released in 2012)
  • iPad Mini, 1st generation (Released in 2012)
  • iPod Touch, 5th generation (Released in 2012)

macOS

A total of 27 Mac system models, released between 2007 and 2009, are no longer supported.

The affected systems are not comprehensively listed here, but external resources exist which contain lists of the exact models. They can be found from Apple and Yama-Mac, for example.

tvOS

The third generation AppleTV (released 2012-2013) is no longer supported.

Why are the requirements being changed?

Prior to now, Rust claimed support for very old Apple OS versions, but many of them never even received passive testing or support. This is a rough place for a toolchain to be in, as it holds back opportunities for improvement in exchange for a support level that few people, if anyone, actually rely on. For Apple's mobile platforms, many of the old versions are now even unable to receive new software due to App Store publishing restrictions.

Additionally, the past two years have clearly indicated that Apple, which has tight control over the toolchains for these targets, is making it difficult-to-impossible to support them anymore. As of Xcode 14, last year's toolchain release, building for many old OS versions became unsupported. Xcode 15 continues this trend. After enough time, continuing to use an older toolchain can even lead to breaking build issues for others.

We want Rust to be a first-class option for developing software for and on Apple's platforms, but to continue this goal we have to set an easier and more realistic compatibility baseline. The new requirements were determined after surveying the Apple and third-party statistics available to us and picking a middle ground that balances compatibility with Rust's needs and limitations.

Do I need to do anything?

If you or an application you develop are affected by this change, there are different options which may be helpful:

  • If possible, raise your minimum supported OS versions. All OS versions discussed above no longer receive support from the vendor, not even security updates.
  • If you are running the Rust compiler or other host tools that were previously supported, consider cross-compiling from a newer host instead. You may also no longer be able to depend on the Rust standard library.
  • If none of these options work, you may need to freeze the version of the Rust toolchain your project builds with. Alternatively, you may be able to maintain a custom toolchain that supports your requirements for any sub-component of it.

If your project does not directly support a specific version, but instead depends on a default previously used by Rust, there are some steps you can take to improve the situation. For example, a number of crates in the ecosystem have hardcoded Rust's default supported versions, since they hadn't changed for a long time:

  • If you use the cc crate to include code from other languages in your project, a future update will handle this transparently.
  • If you need a minimum OS version for anything else, crates should query the new rustc --print deployment-target option for the default, or user-set, value on toolchains using Rust 1.71 or newer. Hardcoded defaults should only be used for older toolchains where this option is unavailable; a build-script sketch follows this list.
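
For crates that currently hardcode a default, a build script can ask the active toolchain instead. A minimal sketch, assuming a Rust 1.71+ toolchain (older toolchains do not understand this flag and still need a hardcoded fallback):

// build.rs (sketch): query the toolchain's default, or user-set, deployment
// target instead of hardcoding one.
use std::process::Command;

fn main() {
    // Cargo sets RUSTC for build scripts; fall back to plain `rustc`.
    let rustc = std::env::var("RUSTC").unwrap_or_else(|_| "rustc".into());
    let output = Command::new(rustc)
        .args(["--print", "deployment-target"])
        .output()
        .expect("failed to run rustc");
    let target = String::from_utf8_lossy(&output.stdout);
    // Hand the raw value to the rest of the build as needed.
    println!("cargo:warning=deployment target: {}", target.trim());
}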

Continue Reading…

Rust Blog

crates.io Policy Update RFC

Around the end of July the crates.io team opened an RFC to update the current crates.io usage policies. This policy update addresses operational concerns of the crates.io community service that have arisen since the last significant policy update in 2017, particularly related to name squatting and spam. The RFC has caused considerable discussion, and most of the suggested improvements have since been integrated into the proposal.

At the last team meeting the crates.io team decided to move the RFC forward and start the final comment period process.

A couple of community members have made us aware, though, that the RFC might not have been visible enough in the Rust community. We hope that this blog post changes that.

We invite you all to review the RFC and let us know if there are still any major concerns with these proposed policies.

Here is a quick TL;DR:

  • The current policies are quite vague on a couple of topics. The new policies are more explicit.
  • Reserving names is still allowed, but only to a certain degree and if you have a good reason for it.
  • The crates.io team will try to contact crate owners before taking any actions.

Finally, if you have any comments, please open threads on the RFC diff, instead of using the main comment box, to keep the discussion more structured. Thank you!

Continue Reading…

Rust Blog

Announcing Rust 1.72.1

The Rust team has published a new point release of Rust, 1.72.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.72.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.72.1

1.72.1 resolves a few regressions introduced in 1.72.0:

Contributors to 1.72.1

Many people came together to create Rust 1.72.1. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Electing New Project Directors

Today we are launching the process to elect new Project Directors to the Rust Foundation Board of Directors. As we begin the process, we wanted to spend some time explaining the goals and procedures we will follow. We will summarize everything here, but if you would like to you can read the official process documentation.

We ask all project members to begin working with their Leadership Council representative to nominate potential Project Directors. See the Candidate Gathering section for more details. Nominations are due by September 15, 2023.

What are Project Directors?

The Rust Foundation Board of Directors has five seats reserved for Project Directors. These Project Directors serve as representatives of the Rust project itself on the Board. Like all Directors, the Project Directors are elected by the entity they represent, which in the case of the Rust Project means they are elected by the Rust Leadership Council. Project Directors serve for a term of two years and will have staggered terms. This year we will appoint two new directors and next year we will appoint three new directors.

The current project directors are Jane Losare-Lusby, Josh Stone, Mark Rousskov, Ryan Levick and Tyler Mandry. This year, Jane Losare-Lusby and Josh Stone will be rotating out of their roles as Project Directors, so the current elections are to fill their seats. We are grateful for the work Jane and Josh have put in during their terms as Project Directors!

We want to make sure the Project Directors can effectively represent the project as a whole, so we are soliciting input from the whole project. The elections process will go through two phases: Candidate Gathering and Election. Read on for more detail about how these work.

Candidate Gathering

The first phase is beginning right now. In this phase, we are inviting the members of all of the top level Rust teams and their subteams to nominate people who will make good project directors. The goal is to bubble these up to the Council through each of the top-level teams. You should be hearing from your Council Representative soon with more details, but if not, feel free to reach out to them directly.

Each team is encouraged to suggest candidates. Since we are electing two new directors, it would be ideal for teams to nominate at least two candidates. Nominees can be anyone in the project and do not have to be a member of the team who nominates them.

The candidate gathering process will be open until September 15, at which point each team's Council Representative will share their team's nominations and reasoning with the whole Leadership Council. At this point, the Council will confirm with each of the nominees that they are willing to accept the nomination and fill the role of Project Director. Then the Council will publish the set of candidates.

This then starts a ten day period where members of the Rust Project are invited to share feedback on the nominees with the Council. This feedback can include reasons why a nominee would make a good project director, or concerns the Council should be aware of.

The Council will announce the set of nominees by September 19 and the ten day feedback period will last until September 29. Once this time has passed, we will move on to the election phase.

Election

The Council will meet during the week of October 1 to complete the election process. In this meeting we will discuss each candidate, and once we have done so, the facilitator will propose a set of two of them to be the new Project Directors. The facilitator puts this to a vote, and if the Council unanimously agrees with the proposed pair of candidates then the process is completed. Otherwise, we will give another opportunity for council members to express their objections and we will continue with another proposal. This process repeats until we find two nominees whom the Council can unanimously consent to. The Council will then confirm these nominees through an official vote.

Once this is done, we will announce the new Project Directors. In addition, we will contact each of the nominees, including those who were not elected, to tell them a little bit more about what we saw as their strengths and opportunities for growth to help them serve better in similar roles in the future.

Timeline

This process will continue through all of September and into October. Below are the key dates:

  • Candidate nominations due: September 15
  • Candidates published: September 19
  • Feedback period: September 19 - 29
  • Election meeting: Week of October 1

After the election meeting happens, the Rust Leadership Council will announce the results and the new Project Directors will assume their responsibilities.

Acknowledgements

A number of people have been involved in designing and launching this election process and we wish to extend a heartfelt thanks to all of them! We'd especially like to thank the members of the Project Director Election Proposal Committee: Jane Losare-Lusby, Eric Holk, and Ryan Levick. Additionally, many members of the Rust Community have provided feedback and thoughtful discussions that led to significant improvements to the process. We are grateful for all of your contributions.

Continue Reading…

Rust Blog

Change in Guidance on Committing Lockfiles

For years, the Cargo team has encouraged Rust developers to commit their Cargo.lock file for packages with binaries but not libraries. We now recommend people do what is best for their project. To help people make a decision, we do include some considerations and suggest committing Cargo.lock as a starting point in their decision making. To align with that starting point, cargo new will no longer ignore Cargo.lock for libraries as of nightly-2023-08-24. Regardless of what decision projects make, we encourage regular testing against their latest dependencies.

Background

The old guidelines ensured libraries tested their latest dependencies, which helped us keep quality high within Rust's package ecosystem by ensuring issues, especially backwards compatibility issues, were quickly found and addressed. While this extra testing was not exhaustive, we believe it helped foster a culture of quality in this nascent ecosystem.

This hasn't been without its downsides, though. Ignoring the lockfile removed an important piece of history from code bases, making it harder for maintainers to bisect to find the root cause of a bug. For contributors, especially newer ones, it is another potential source of confusion and frustration when CI becomes unreliable because a dependency was yanked or a new release contains a bug.

Why the change

A lot has changed for Rust since the guideline was written. Rust has shifted from being a language for early adopters to being more mainstream, and we need to be mindful of the on-boarding experience of these new-to-Rust developers. Also, with this wider adoption, it isn't always practical to assume everyone is using the latest Rust release, and the community has been working through how to manage support for minimum supported Rust versions (MSRV). Part of this is maintaining an instance of your dependency tree that can build with your MSRV. A lockfile is an appropriate way to pin versions for your project so you can validate your MSRV, but we found people were instead putting upper bounds on their version requirements due to the strength of our prior guideline, despite that likely being a worse solution.

The wider software development ecosystem has also changed a lot in the intervening time. CI has become easier to set up and maintain. We also have products like Dependabot and Renovate. This has opened up options besides having version control ignore Cargo.lock to test newer dependencies. Developers could have a scheduled job that first runs cargo update. They could also have bots regularly update their Cargo.lock in PRs, ensuring they pass CI before being merged.
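
For example (the exact commands will vary by project), such a scheduled job could refresh the lockfile and then exercise the test suite against the newest compatible dependency versions:

$ cargo update
$ cargo test --workspace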

Since there isn't a universal answer to these situations, we felt it was best to leave the choice to developers and give them the information they need to make a decision. For feedback on this policy change, see rust-lang/cargo#8728. You can also reach out to the Cargo team more generally on Zulip.

Continue Reading…

Rust Blog

Announcing Rust 1.72.0

The Rust team is happy to announce a new version of Rust, 1.72.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.72.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.72.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.72.0 stable

Rust reports potentially useful cfg-disabled items in errors

You can conditionally enable Rust code using cfg, such as to provide certain functions only with certain crate features, or only on particular platforms. Previously, items disabled in this way would be effectively invisible to the compiler. Now, though, the compiler will remember the name and cfg conditions of those items, so it can report (for example) if a function you tried to call is unavailable because you need to enable a crate feature.

   Compiling my-project v0.1.0 (/tmp/my-project)
error[E0432]: unresolved import `rustix::io_uring`
   --> src/main.rs:1:5
    |
1   | use rustix::io_uring;
    |     ^^^^^^^^^^^^^^^^ no `io_uring` in the root
    |
note: found an item that was configured out
   --> /home/username/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.8/src/lib.rs:213:9
    |
213 | pub mod io_uring;
    |         ^^^^^^^^
    = note: the item is gated behind the `io_uring` feature

For more information about this error, try `rustc --explain E0432`.
error: could not compile `my-project` (bin "my-project") due to previous error

Const evaluation time is now unlimited

To prevent user-provided const evaluation from getting into a compile-time infinite loop or otherwise taking unbounded time at compile time, Rust previously limited the maximum number of statements run as part of any given constant evaluation. However, especially creative Rust code could hit these limits and produce a compiler error. Worse, whether code hit the limit could vary wildly based on libraries invoked by the user; if a library you invoked split a statement into two within one of its functions, your code could then fail to compile.

Now, you can do an unlimited amount of const evaluation at compile time. To avoid having long compilations without feedback, the compiler will always emit a message after your compile-time code has been running for a while, and repeat that message after a period that doubles each time. By default, the compiler will also emit a deny-by-default lint (const_eval_long_running) after a large number of steps to catch infinite loops, but you can allow(const_eval_long_running) to permit especially long const evaluation.

Uplifted lints from Clippy

Several lints from Clippy have been pulled into rustc:

  • clippy::undropped_manually_drops to undropped_manually_drops (deny)
    • ManuallyDrop does not drop its inner value, so calling std::mem::drop on it does nothing. Instead, the lint will suggest ManuallyDrop::into_inner first, or you may use the unsafe ManuallyDrop::drop to run the destructor in-place. This lint is denied by default.
  • clippy::invalid_utf8_in_unchecked to invalid_from_utf8_unchecked (deny) and invalid_from_utf8 (warn)
    • The first checks for calls to std::str::from_utf8_unchecked and std::str::from_utf8_unchecked_mut with an invalid UTF-8 literal, which violates their safety pre-conditions, resulting in undefined behavior. This lint is denied by default.
    • The second checks for calls to std::str::from_utf8 and std::str::from_utf8_mut with an invalid UTF-8 literal, which will always return an error. This lint is a warning by default.
  • clippy::cmp_nan to invalid_nan_comparisons (warn)
    • This checks for comparisons with f32::NAN or f64::NAN as one of the operands. NaN does not compare meaningfully to anything – not even itself – so those comparisons are always false. This lint is a warning by default, and will suggest calling the is_nan() method instead. A short example follows this list.
  • clippy::cast_ref_to_mut to invalid_reference_casting (allow)
    • This checks for casts of &T to &mut T without using interior mutability, which is immediate undefined behavior, even if the reference is unused. This lint is currently allowed by default due to potential false positives, but it is planned to be denied by default in 1.73 after implementation improvements.
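
As a small illustration of the invalid_nan_comparisons lint from the list above (example code, not taken from the release notes):

fn main() {
    let x = f64::NAN;
    // Warns via `invalid_nan_comparisons`: NaN never compares equal to
    // anything, so this condition is always false.
    if x == f64::NAN {
        println!("never printed");
    }
    // Suggested replacement:
    if x.is_nan() {
        println!("x is NaN");
    }
}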

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Future Windows compatibility

In a future release we're planning to increase the minimum supported Windows version to 10. The accepted proposal in compiler MCP 651 is that Rust 1.75 will be the last to officially support Windows 7, 8, and 8.1. When Rust 1.76 is released in February 2024, only Windows 10 and later will be supported as tier-1 targets. This change will apply both as a host compiler and as a compilation target.

Contributors to 1.72.0

Many people came together to create Rust 1.72.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

2022 Annual Rust Survey Results

Hello, Rustaceans!

For the 6th year in a row, the Rust Project conducted a survey on the Rust programming language, with participation from project maintainers, contributors, and those generally interested in the future of Rust. This edition of the annual State of Rust Survey opened for submissions on December 5 and ran until December 22, 2022.

First, we'd like to thank you for your patience with these long-delayed results. We hope to identify a more expedient and sustainable process going forward so that the results come out more quickly and have even more actionable insights for the community.

The goal of this survey is always to give our wider community a chance to express their opinions about the language we all love and help shape its future. We’re grateful to those of you who took the time to share your voice on the state of Rust last year.

Before diving into a few highlights, we would like to thank everyone who was involved in creating the State of Rust survey with special acknowledgment to the translators whose work allowed us to offer the survey in English, Simplified Chinese, Traditional Chinese, French, German, Japanese, Korean, Portuguese, Russian, Spanish, and Ukrainian.

Participation

In 2022, we had 9,433 total survey completions and an increased survey completion rate of 82% vs. 76% in 2021. While the goal is always total survey completion for all participants, the survey requires time, energy, and focus – we consider this figure quite high and were pleased by the increase.

We also saw a significant increase in the number of people viewing but not participating in the survey (from 16,457 views in 2021 to 25,581 – a view increase of over 55%). While this is likely due to a number of different factors, we feel this information speaks to the rising interest in Rust and the growing general audience following its evolution.

In 2022, the survey had 11,482 responses, which is a slight decrease of 6.4% from 2021; however, the number of respondents that answered all survey questions has increased year over year. We were interested to see this slight decrease in responses, as this year’s survey was much shorter than in previous years – clearly, survey length is not the only factor driving participation.

Community

We were pleased to offer the survey in 11 languages – more than ever before, with the addition of a Ukrainian translation in 2022. 77% of respondents took this year’s survey in English, 5% in Chinese (simplified), 4% in German and French, 2% in Japanese, Spanish, and Russian, and 1% in Chinese (traditional), Korean, Portuguese, and Ukrainian. This is our lowest percentage of respondents taking the survey in English to date, which is an exciting indication of the growing global nature of our community!

The vast majority of our respondents reported being most comfortable communicating on technical topics in English (93%), followed by Chinese (7%).

Rust user respondents were asked which country they live in. The top 13 countries represented were as follows: United States (25%), Germany (12%), China (7%), United Kingdom (6%), France (5%), Canada (4%), Russia (4%), Japan (3%), Netherlands (3%), Sweden (2%), Australia (2%), Poland (2%), India (2%). Nearly 72.5% of respondents elected to answer this question.

While we see global access to Rust education as a critical goal for our community, we are proud to say that Rust was used all over the world in 2022!

Rust Usage

More people are using Rust than ever before! Over 90% of survey respondents identified as Rust users, and of those using Rust, 47% do so on a daily basis – an increase of 4% from the previous year.

30% of Rust user respondents can write simple programs in Rust, 27% can write production-ready code, and 42% consider themselves productive using Rust.

Of the former Rust users who completed the survey, 30% cited difficulty as the primary reason for giving up while nearly 47% cited factors outside of their control.

Graph: Why did you stop using Rust?

Similarly, 26% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not having used it (with 62% reporting that they simply haven’t had the chance to prioritize learning Rust yet).

Graph: Why don't you use Rust?

Rust Usage at Work

The growing maturation of Rust can be seen through the increased number of different organizations utilizing the language in 2022. In fact, 29.7% of respondents stated that they use Rust for the majority of their coding work at their workplace, which is a 51.8% increase compared to the previous year.

Graph: Are you using Rust at work?

There are numerous reasons why we are seeing increased use of Rust in professional environments. Top reasons cited for the use of Rust include the perceived ability to write "bug-free software" (86%), Rust's performance characteristics (84%), and Rust's security and safety guarantees (69%). We were also pleased to find that 76% of respondents continue to use Rust simply because they found it fun and enjoyable. (Respondents could select more than one option here, so the numbers don't add up to 100%.)

Graph: Why do you use Rust at work?

Of those respondents that used Rust at work, 72% reported that it helped their team achieve its goals (a 4% increase from the previous year) and 75% have plans to continue using it on their teams in the future.

But like any language being applied in the workplace, Rust’s learning curve is an important consideration; 39% of respondents using Rust in a professional capacity reported the process as “challenging” and 9% of respondents said that adopting Rust at work has “slowed down their team”. However, 60% of productive users felt Rust was worth the cost of adoption overall.

Graph: Reasons for using Rust at work

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Supporting the Future of Rust

A key goal of the State of Rust survey is to shed light on challenges, concerns, and priorities Rustaceans are currently sitting with.

Of those respondents who shared their main worries for the future of Rust, 26% have concerns that the developers and maintainers behind Rust are not properly supported – a decrease of more than 30% from the previous year’s findings. One area of focus in the future may be to see how the Project in conjunction with the Rust Foundation can continue to push that number towards 0%.

While 38% have concerns about Rust “becoming too complex”, only a small number of respondents were concerned about documentation, corporate oversight, or speed of evolution. 34% of respondents are not worried about the future of Rust at all.

This year’s survey reflects a 21% decrease in fears about Rust’s usage in the industry since the last survey. Faith in Rust’s staying power and general utility is clearly growing as more people find Rust and become lasting members of the community. As always, we are grateful for your honest feedback and dedication to improving this language for everyone.

Graph: Worries about the future of Rust

Another Round of Thanks

To quote an anonymous survey respondent, “Thanks for all your hard work making Rust awesome!” – Rust wouldn’t exist or continue to evolve for the better without the many Project members and the wider Rust community. Thank you to those who took the time to share their thoughts on the State of Rust in 2022!

Continue Reading…

Rust Blog

Announcing Rust 1.71.1

The Rust team has published a new point release of Rust, 1.71.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.71.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.71.1 stable

Rust 1.71.1 fixes Cargo not respecting the umask when extracting dependencies, which could allow a local attacker to edit the cache of extracted source code belonging to another local user, potentially executing code as another user. This security vulnerability is tracked as CVE-2023-38497, and you can read more about it on the advisory we published earlier today. We recommend all users to update their toolchain as soon as possible.

Rust 1.71.1 also addresses several regressions introduced in Rust 1.71.0, including bash completion being broken for users of Rustup, and the suspicious_double_ref_op lint being emitted when calling borrow() even though it shouldn't be.

You can find more detailed information on the specific regressions, and other minor fixes, in the release notes.

Contributors to 1.71.1

Many people came together to create Rust 1.71.1. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Security advisory for Cargo (CVE-2023-38497)

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified that Cargo did not respect the umask when extracting crate archives on UNIX-like systems. If the user downloaded a crate containing files writeable by any local user, another local user could exploit this to change the source code compiled and executed by the current user.

This vulnerability has been assigned CVE-2023-38497.

Overview

In UNIX-like systems, each file has three sets of permissions: for the user owning the file, for the group owning the file, and for all other local users. The "umask" is configured on most systems to limit those permissions during file creation, removing dangerous ones. For example, the default umask on macOS and most Linux distributions only allows the user owning a file to write to it, preventing the group owning it or other local users from doing the same (a umask of 022 turns a requested mode of 666 into 644).

When a dependency is downloaded by Cargo, its source code has to be extracted on disk to allow the Rust compiler to read it as part of the build. To improve performance, this extraction only happens the first time a dependency is used, caching the pre-extracted files for future invocations.

Unfortunately, it was discovered that Cargo did not respect the umask during extraction, and propagated the permissions stored in the crate archive as-is. If an archive contained files writeable by any user on the system (and the system configuration didn't prevent writes through other security measures), another local user on the system could replace or tweak the source code of a dependency, potentially achieving code execution the next time the project is compiled.

Affected Versions

All Rust versions before 1.71.1 on UNIX-like systems (like macOS and Linux) are affected. Note that additional system-dependent security measures configured on the local system might prevent the vulnerability from being exploited.

Users on Windows and other non-UNIX-like systems are not affected.

Mitigations

We recommend all users to update to Rust 1.71.1, which will be released later today, as it fixes the vulnerability by respecting the umask when extracting crate archives. If you build your own toolchain, patches for 1.71.0 source tarballs are available here.

To prevent existing cached extractions from being exploitable, the Cargo binary included in Rust 1.71.1 or later will purge the caches it tries to access if they were generated by older Cargo versions.

If you cannot update to Rust 1.71.1, we recommend configuring your system to prevent other local users from accessing the Cargo directory, usually located in ~/.cargo:

chmod go= ~/.cargo

Acknowledgments

We want to thank Addison Crump for responsibly disclosing this to us according to the Rust security policy.

We also want to thank the members of the Rust project who helped us disclose the vulnerability: Weihang Lo for developing the fix; Eric Huss for reviewing the fix; Pietro Albini for writing this advisory; Pietro Albini, Manish Goregaokar and Josh Stone for coordinating this disclosure; Josh Triplett, Arlo Siemen, Scott Schafer, and Jacob Finkelman for advising during the disclosure.

Continue Reading…

Rust Blog

Announcing Rust 1.71.0

The Rust team is happy to announce a new version of Rust, 1.71.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.71.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.71.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.71.0 stable

C-unwind ABI

1.71.0 stabilizes C-unwind (and other -unwind suffixed ABI variants [1]).

The behavior for unforced unwinding (the typical case) is specified in this table from the RFC which proposed this feature. To summarize:

Each ABI is mostly equivalent to the same ABI without -unwind, except that with -unwind the behavior is defined to be safe when an unwinding operation (panic or C++ style exception) crosses the ABI boundary. For panic=unwind, this is a valid way to let exceptions from one language unwind the stack in another language without terminating the process (as long as the exception is caught in the same language from which it originated); for panic=abort, this will typically abort the process immediately.

For this initial stabilization, no change is made to the existing ABIs (e.g. "C"), and unwinding across them remains undefined behavior. A future Rust release will amend these ABIs to match the behavior specified in the RFC as the final part of stabilizing this feature (usually aborting at the boundary). Users are encouraged to start using the new unwind ABI variants in their code to remain future-proof if they need to unwind across the ABI boundary.
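
A minimal sketch of the new ABI in use (the foreign function here is hypothetical, standing in for C or C++ code compiled with unwinding support):

// A foreign function that may raise a C++ exception. With "C-unwind", that
// exception unwinding into Rust is defined behavior under panic=unwind,
// instead of undefined behavior as it would be with plain "C".
extern "C-unwind" {
    fn might_throw();
}

// A Rust function exposed to foreign code. Under panic=unwind, a panic here
// may unwind into the foreign caller rather than being undefined behavior.
#[no_mangle]
pub extern "C-unwind" fn rust_callback() {
    panic!("this panic may cross the FFI boundary");
}

pub fn call_foreign() {
    unsafe { might_throw() };
}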

Debugger visualization attributes

1.71.0 stabilizes support for a new attribute, #[debug_visualizer(natvis_file = "...")] and #[debug_visualizer(gdb_script_file = "...")], which allows embedding Natvis descriptions and GDB scripts into Rust libraries to improve debugger output when inspecting data structures created by those libraries. Rust itself has packaged similar scripts for the standard library for some time, but this feature makes it possible for library authors to provide a similar experience to end users.

See the reference for details on usage.

raw-dylib linking

On Windows platforms, Rust now supports using functions from dynamic libraries without requiring those libraries to be available at build time, using the new kind="raw-dylib" option for #[link].

This avoids requiring users to install those libraries (particularly difficult for cross-compilation), and avoids having to ship stub versions of libraries in crates to link against. This simplifies crates providing bindings to Windows libraries.

Rust also supports binding to symbols provided by DLLs by ordinal rather than by name, using the new #[link_ordinal] attribute.
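
A small example of what this looks like in practice (Windows-only; kernel32 and GetCurrentProcessId are used here simply as a well-known DLL and symbol):

// Link against kernel32.dll without needing an import library at build time.
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    // The call is resolved against the DLL loaded at run time.
    let pid = unsafe { GetCurrentProcessId() };
    println!("current process id: {pid}");
}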

Upgrade to musl 1.2

As previously announced, Rust 1.71 updates the musl version to 1.2.3. Most users should not be affected by this change.

Const-initialized thread locals

Rust 1.59.0 stabilized const-initialized thread local support in the standard library, which allows for more optimal code generation. However, until now this feature was missed in release notes and documentation. Note that this stabilization does not make const { ... } a valid expression or syntax in other contexts; that is a separate and currently unstable feature.

use std::cell::Cell;

thread_local! {
    pub static FOO: Cell<u32> = const { Cell::new(1) };
}

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.71.0

Many people came together to create Rust 1.71.0. We couldn't have done it without all of you. Thanks!

  1. List of stabilized ABIs can be found in the stabilization report: https://github.com/rust-lang/rust/issues/74990#issuecomment-1363473645

Continue Reading…

Rust Blog

Announcing regex 1.9

The regex sub-team is announcing the release of regex 1.9. The regex crate is maintained by the Rust project and is the recommended way to use regular expressions in Rust. Its defining characteristic is its guarantee of worst case linear time searches with respect to the size of the string being searched.

Releases of the regex crate aren't normally announced on this blog, but since the majority of its internals have been rewritten in version 1.9, this announcement serves to encourage extra scrutiny. If you run into any problems or performance regressions, please report them on the issue tracker or ask questions on the Discussion forum.

Few API additions have been made, but one worth calling out is the Captures::extract method, which should make getting capture groups more convenient in some cases. Otherwise, the main change folks should see is hopefully faster search times.
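
For example (a small sketch based on the method's documented usage; the pattern and input here are made up):

use regex::Regex;

fn main() {
    let re = Regex::new(r"(\d{4})-(\d{2})-(\d{2})").unwrap();
    // `extract` returns the overall match plus an array of exactly N groups,
    // which destructures directly into named variables.
    let (_, [year, month, day]) = re.captures("2023-07-05").unwrap().extract();
    println!("year={year} month={month} day={day}");
}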

You can read more in the CHANGELOG and in a more in-depth blog post on regex crate internals as a library.

Continue Reading…

Rust Blog

Rustfmt support for let-else statements

Rustfmt will add support for formatting let-else statements starting with the nightly 2023-07-02 toolchain, and then let-else formatting support should come to stable Rust as part of the 1.72 release.

Overview

let-else statements were stabilized back in 2022 as part of the 1.65.0 release. However, the current and previous versions of Rustfmt did not have formatting support for let-else statements. When Rustfmt encountered a let-else statement it would leave it alone and maintain the manual styling originally authored by the developer.

After updating to one of the toolchains with let-else formatting support, you may notice that cargo fmt/rustfmt invocations want to "change" the formatting of your let-else statements. However, this isn't actually a "change" in formatting, but instead is simply Rustfmt applying the let-else formatting rules for the very first time.

Rustfmt support for let-else statements has been a long standing request, and the Project has taken a number of steps to prevent a recurrence of the delay between feature stabilization and formatting support, as well as putting additional procedures in place which should enable more expeditious formatting support for nightly-only syntax.

Background and Context

Rust has an official Style Guide that articulates the default formatting style for Rust code. The Style Guide functions as a specification that defines the default formatting behavior for Rustfmt, and Rustfmt's primary mission is to provide automated formatting capabilities based around that Style Guide specification. Rustfmt is a direct consumer of the Style Guide, but Rustfmt does not unilaterally dictate what the default formatting style of language constructs should be.

The initial Style Guide was developed many years ago (beginning in 2016), and was driven by a Style Team in collaboration with the community through an RFC process. The Style Guide was then made official in 2018 via RFC 2436.

That initial Style Team was more akin to a Project Working Group in today's terms, as they had a fixed scope with a main goal to simply pull together the initial Style Guide. Accordingly that initial Style Team was disbanded once the Guide was made official.

There was subsequently no designated group within the Rust Project that was explicitly responsible for the Style Guide, and no group explicitly focused on determining the official Style for new language constructs.

The absence of a team/group with ownership of the Style Guide didn't really cause problems at first, as the new syntax that came along during the first few years was comparatively non-controversial when it came to default style and formatting. However, over time challenges started to develop when there was increasingly less community consensus and no governing team within the Project to make the final decision about how new language syntax should be styled.

This was certainly the case with let-else statements, with lots of varying perspectives on how they should be styled. Without any team/group to make the decision and update the Style Guide with the official rules for let-else statements, Rustfmt was blocked and was unable to proceed.

These circumstances around let-else statements resulted in a greater understanding across the Project of the need to establish a team to own and maintain the Style Guide. However, it was also well understood that spinning up a new team and respective processes would take some time, and the decision was made to not block the stabilization of features that were otherwise fully ready to be stabilized, like let-else statements, in the nascency of such a new team and new processes.

Accordingly, let-else statements were stabilized and released without formatting support, with the understanding that the new Style Team, and subsequently the Rustfmt Team, would later complete the requisite work to incorporate formatting support.

Steps Taken

A number of steps have been taken to improve matters in this space. This includes steps to address the aforementioned issues and deal with some of the "style debt" that accrued over the years in the absence of a Style Team, and also to establish new processes and mechanisms to bring about other formatting/styling improvements.

  • Launched a new, permanent Style Team that's responsible for the Style Guide.
  • Established a mechanism to evolve the default style while still maintaining stability guarantees (RFC 3338).
  • Developed a nightly-syntax-policy that provides clarity around style rules for unstable/nightly-only syntax, and enables Rustfmt to provide earlier support for such syntax.

Furthermore, the Style Team is continuing to diligently work through the backlog of those "style debt" items, and the Rustfmt team is in turn actively working on the corresponding formatting implementations. The Rustfmt team is also focused on growing its membership in order to improve contributor and review capacity.

Conclusion

We know that many have wanted let-else formatting support for a while, and we're sorry it's taken this long. We also recognize that Rustfmt now starting to format let-else statements may cause some formatting churn, and that's a highly undesirable scenario we strive to avoid.

However, we believe the benefits of delivering let-else formatting support outweigh those drawbacks. While it's possible there may be another future case or two where we have to do something similar as we work through the style backlog, we're hopeful that over time this new team and these new processes will reduce (or eliminate) the possibility of a recurrence by addressing the historical problems that played such an outsize role in the let-else delay, and also bring about various other improvements.

Both the Style and Rustfmt teams hang out on Zulip, so if you'd like to get more involved or have any questions, please drop by T-Style and/or T-Rustfmt.

Continue Reading…

Rust Blog

Improved API tokens for crates.io

If you recently generated a new API token on crates.io, you might have noticed our new API token creation page and some of the new features it now supports.

Previously, when clicking the "New Token" button on https://crates.io/settings/tokens, you were only provided with the option to choose a token name, without any additional choices. We knew that we wanted to offer our users more flexibility, but in the previous user interface that would have been difficult, so our first step was to build a proper "New API Token" page.

Our roadmap included two essential features known as "token scopes". The first of them allows you to restrict API tokens to specific operations. For instance, you can configure a token to solely enable the publishing of new versions for existing crates, while disallowing the creation of new crates. The second one offers an optional restriction where tokens can be limited to only work for specific crate names. If you want to read more about how these features were planned and implemented, you can take a look at our corresponding tracking issue.

To further enhance the security of crates.io API tokens, we prioritized the implementation of expiration dates. Since we had already touched most of the token-related code, this was relatively straightforward. We are delighted to announce that our "New API Token" page now supports endpoint scopes, crate scopes, and expiration dates:

Screenshot of the "New API Token" page

Similar to the API token creation process on github.com, you can choose to not have any expiration date, use one of the presets, or even choose a custom expiration date to suit your requirements.

If you come across any issues or have questions, feel free to reach out to us on Zulip or open an issue on GitHub.

Lastly, we, the crates.io team, would like to express our gratitude to the OpenSSF's Alpha-Omega Initiative and JFrog for their contributions to the Rust Foundation security initiative. Their support has been instrumental in enabling us to implement these features and undertake extensive security-related work on the crates.io codebase over the past few months.

Continue Reading…

Rust Blog

Introducing the Rust Leadership Council

As of today, RFC 3392 has been merged, forming the new top-level governance body of the Rust Project: the Leadership Council. The creation of this Council marks the end of both the Core Team and the interim Leadership Chat.

The Council will assume responsibility for top-level governance concerns while most of the responsibilities of the Rust Project (such as maintenance of the compiler and core tooling, evolution of the language and standard libraries, administration of infrastructure, etc.) remain with the nine top-level teams.

Each of these top-level teams, as defined in the RFC, has chosen a representative; together, these representatives form the Council:

  • Compiler: Eric Holk
  • Crates.io: Carol (Nichols || Goulding)
  • Dev Tools: Eric Huss
  • Infrastructure: Ryan Levick
  • Language: Jack Huey
  • Launching Pad [1]: Jonathan Pallant
  • Library: Mara Bos
  • Moderation: Khionu Sybiern
  • Release: Mark Rousskov

First, we want to take a moment to thank the Core Team and interim Leadership Chat for the hard work they've put in over the years. Their efforts have been critical for the Rust Project. However, we do recognize that the governance of the Rust Project has had its shortcomings. We hope to build on the successes and improve upon the failures to ultimately lead to greater transparency and accountability.

We know that there is a lot of work to do and we are eager to get started. In the coming weeks we will be establishing the basic infrastructure for the group, including creating a plan for regular meetings and a process for raising agenda items, setting up a team repository, and ultimately completing the transition from the former Rust leadership structures.

We will post more once this bootstrapping process has been completed.

  1. The RFC defines the launching pad team as a temporary umbrella team to represent subteams that do not currently have a top-level team.

Continue Reading…

Rust Blog

Announcing Rust 1.70.0

The Rust team is happy to announce a new version of Rust, 1.70.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.70.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.70.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.70.0 stable

Sparse by default for crates.io

Cargo's "sparse" protocol is now enabled by default for reading the index from crates.io. This feature was previously stabilized with Rust 1.68.0, but still required configuration to use that with crates.io. The announced plan was to make that the default in 1.70.0, and here it is!

You should see substantially improved performance when fetching information from the crates.io index. Users behind a restrictive firewall will need to ensure that access to https://index.crates.io is available. If for some reason you need to stay with the previous default of using the git index hosted by GitHub, the registries.crates-io.protocol config setting can be used to change the default.

One side-effect to note about changing the access method is that this also changes the path to the crate cache, so dependencies will be downloaded anew. Once you have fully committed to using the sparse protocol, you may want to clear out the old $CARGO_HOME/registry/*/github.com-* paths.

OnceCell and OnceLock

Two new types have been stabilized for one-time initialization of shared data, OnceCell and its thread-safe counterpart OnceLock. These can be used anywhere that immediate construction is not wanted, and perhaps not even possible, such as for non-const data in global variables.

use std::sync::OnceLock;

static WINNER: OnceLock<&str> = OnceLock::new();

fn main() {
    let winner = std::thread::scope(|s| {
        s.spawn(|| WINNER.set("thread"));

        std::thread::yield_now(); // give them a chance...

        WINNER.get_or_init(|| "main")
    });

    println!("{winner} wins!");
}

Crates such as lazy_static and once_cell have filled this need in the past, but now these building blocks are part of the standard library, ported from once_cell's unsync and sync modules. There are still more methods that may be stabilized in the future, as well as companion LazyCell and LazyLock types that store their initializing function, but this first step in stabilization should already cover many use cases.
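For the single-threaded case, a minimal sketch of using OnceCell (an illustrative example, not taken from the release notes) might look like this:

use std::cell::OnceCell;

fn main() {
    let cell: OnceCell<String> = OnceCell::new();

    // The closure runs only on the first call; later calls return the stored value.
    let value = cell.get_or_init(|| "computed once".to_string());
    println!("{value}");

    assert_eq!(cell.get(), Some(&"computed once".to_string()));
}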

IsTerminal

This newly-stabilized trait has a single method, is_terminal, to determine if a given file descriptor or handle represents a terminal or TTY. This is another case of standardizing functionality that existed in external crates, like atty and is-terminal, using the C library isatty function on Unix targets and similar functionality elsewhere. A common use case is for programs to distinguish between running in scripts or interactive modes, like presenting colors or even a full TUI when interactive.

use std::io::{stdout, IsTerminal};

fn main() {
    let use_color = stdout().is_terminal();
    // if so, add color codes to program output...
}
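Building on that use case, a program might check whether standard input is attached to a terminal before deciding to prompt the user, as in this minimal sketch (an illustrative example, not from the release notes):

use std::io::{stdin, stdout, IsTerminal, Write};

fn main() {
    if stdin().is_terminal() {
        // Interactive use: ask the user directly.
        print!("Enter your name: ");
        stdout().flush().unwrap();
    } else {
        // Input is piped in (e.g. from a script): skip the prompt.
    }
}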

Named levels of debug information

The -Cdebuginfo compiler option has previously only supported numbers 0..=2 for increasing amounts of debugging information, where Cargo defaults to 2 in dev and test profiles and 0 in release and bench profiles. These debug levels can now be set by name: "none" (0), "limited" (1), and "full" (2), as well as two new levels, "line-directives-only" and "line-tables-only".

The Cargo and rustc documentation both called level 1 "line tables only" before, but it was more than that, including information about all functions, just not types and variables. That level is now called "limited", and the new "line-tables-only" level is further reduced to the minimum needed for backtraces with filenames and line numbers. This may eventually become the level used for -Cdebuginfo=1. The other new level, "line-directives-only", is intended for NVPTX profiling and is otherwise not recommended.

Note that these named options are not yet available to be used via Cargo.toml. Support for that will be available in the next release, 1.71.

Enforced stability in the test CLI

When #[test] functions are compiled, the executable gets a command-line interface from the test crate. This CLI has a number of options, including some that are not yet stabilized and require specifying -Zunstable-options as well, like many other commands in the Rust toolchain. However, while that's only intended to be allowed in nightly builds, that restriction wasn't active in test -- until now. Starting with 1.70.0, stable and beta builds of Rust will no longer allow unstable test options, making them truly nightly-only as documented.

There are known cases where unstable options may have been used without direct user knowledge, especially --format json used in IntelliJ Rust and other IDE plugins. Those projects are already adjusting to this change, and the status of JSON output can be followed in its tracking issue.

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.70.0

Many people came together to create Rust 1.70.0. We couldn't have done it without all of you. Thanks!

Continue Reading…