Rust Blog: Posts

Announcing Rust 1.92.0

The Rust team is happy to announce a new version of Rust, 1.92.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.92.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.92.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.92.0 stable

Deny-by-default never type lints

The language and compiler teams continue to work on stabilization of the never type. In this release the never_type_fallback_flowing_into_unsafe and dependency_on_unit_never_type_fallback future compatibility lints were made deny-by-default, meaning they will cause a compilation error when detected.

It's worth noting that while this can result in compilation errors, it is still a lint; these lints can all be #[allow]ed. These lints also will only fire when building the affected crates directly, not when they are built as dependencies (though a warning will be reported by Cargo in such cases).
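
If you need more time to migrate, the lints can still be allowed explicitly while you fix the affected code. A minimal sketch, using the lint names above at the crate root (a stopgap, not a fix):

#![allow(never_type_fallback_flowing_into_unsafe)]
#![allow(dependency_on_unit_never_type_fallback)]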

These lints detect code which is likely to be broken by the never type stabilization. It is highly advised to fix them if they are reported in your crate graph.

We believe there to be approximately 500 crates affected by this lint. Despite that, we believe this to be acceptable, as lints are not a breaking change and it will allow for stabilizing the never type in the future. For more in-depth justification, see the Language Team's assessment.

unused_must_use no longer warns about Result<(), UninhabitedType>

Rust's unused_must_use lint warns when you ignore the return value of a function whose return type (or the function itself) is annotated with #[must_use]. For instance, it warns when you ignore a returned Result, to remind you to handle it with ? or something like .expect("...").

However, some functions return Result, but the error type they use is not actually "inhabited", meaning you cannot construct any values of that type (e.g. the ! or Infallible types).

The unused_must_use lint now no longer warns on Result<(), UninhabitedType>, or on ControlFlow<UninhabitedType, ()>. For instance, it will not warn on Result<(), Infallible>. This avoids having to check for an error that can never happen.

use core::convert::Infallible;
fn can_never_fail() -> Result<(), Infallible> {
    // ...
    Ok(())
}

fn main() {
    can_never_fail();
}

This is particularly useful with the common pattern of a trait with an associated error type, where the error type may sometimes be infallible:

trait UsesAssocErrorType {
    type Error;
    fn method(&self) -> Result<(), Self::Error>;
}

struct CannotFail;
impl UsesAssocErrorType for CannotFail {
    type Error = core::convert::Infallible;
    fn method(&self) -> Result<(), Self::Error> {
        Ok(())
    }
}

struct CanFail;
impl UsesAssocErrorType for CanFail {
    type Error = std::io::Error;
    fn method(&self) -> Result<(), Self::Error> {
        Err(std::io::Error::other("something went wrong"))
    }
}

fn main() {
    CannotFail.method(); // No warning
    CanFail.method(); // Warning: unused `Result` that must be used
}

Emit unwind tables even when -Cpanic=abort is enabled on Linux

Backtraces with -Cpanic=abort previously worked in Rust 1.22 but were broken in Rust 1.23, as we stopped emitting unwind tables with -Cpanic=abort. In Rust 1.45 a workaround in the form of -Cforce-unwind-tables=yes was stabilized.

In Rust 1.92 unwind tables will be emitted by default even when -Cpanic=abort is specified, allowing for backtraces to work properly. If unwind tables are not desired then users should use -Cforce-unwind-tables=no to explicitly disable them being emitted.
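
If you do want the old behavior (for example to keep binaries as small as possible), you can opt back out. A minimal sketch, assuming you pass the flag through a project's .cargo/config.toml rather than RUSTFLAGS:

[build]
rustflags = ["-Cforce-unwind-tables=no"]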

Validate input to #[macro_export]

Over the past few releases, many changes were made to the way built-in attributes are processed in the compiler. This should greatly improve the error messages and warnings Rust gives for built-in attributes and especially make these diagnostics more consistent among all of the over 100 built-in attributes.

To give a small example, in this release specifically, Rust became stricter about which arguments are allowed for #[macro_export], upgrading that check to a "deny-by-default lint" that will be reported in dependencies.
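
As an illustration (our sketch, not from the release notes): to our knowledge the only argument #[macro_export] accepts is local_inner_macros, so passing anything else now trips the stricter check.

#[macro_export(local_inner_macros)] // accepted
macro_rules! helper {
    () => {};
}

#[macro_export(not_a_real_argument)] // rejected by the stricter validation
macro_rules! broken {
    () => {};
}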

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.92.0

Many people came together to create Rust 1.92.0. We couldn't have done it without all of you. Thanks!

Making it easier to sponsor Rust contributors

TL;DR: You can now find a list of Rust contributors that you can sponsor on this page.

As with many other open-source projects, Rust depends on a large number of contributors, many of whom make Rust better on a volunteer basis or are funded only for a fraction of their open-source contributions.

Supporting these contributors is vital for the long-term health of the Rust language and its toolchain, so that it can keep its current level of quality, but also evolve going forward. Of course, this is nothing new, and there are currently several ongoing efforts to provide stable and sustainable funding for Rust maintainers, such as the Rust Foundation Maintainer Fund or the RustNL Maintainers Fund. We are very happy about that!

That being said, there are multiple ways of supporting the development of Rust. One of them is sponsoring individual Rust contributors directly, through services like GitHub Sponsors. This makes it possible even for individuals or small companies to financially support their favourite contributors. Every bit of funding helps!

Previously, if you wanted to sponsor someone who works on Rust, you had to go on a detective hunt to figure out who the people contributing to the Rust toolchain are, whether they accept sponsorships, and through which service. That was a lot of work, and it could be a barrier to sponsorship. So we simplified it!

Now we have a dedicated Funding page on the Rust website, which helpfully shows members of the Rust Project that are currently accepting funds through sponsorship [1]. You can click on the name of a contributor to find out what teams they are a part of and what kind of work they do in the Rust Project.

Note that the list of contributors accepting funding on this page is non-exhaustive. We made it opt-in, so that contributors can decide on their own whether they want to be listed there or not.

If you ever wanted to support the development of Rust "in the small", it is now simpler than ever.

  1. The order of people on the funding page is shuffled on every page load to reduce unnecessary ordering bias.

Updating Rust's Linux musl targets to 1.2.5

Beginning with Rust 1.93 (slated for stable release on 2026-01-22), the various *-linux-musl targets will all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le which bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.

For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable Linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.

However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2 years ago), and we have been waiting for newer versions of the libc crate to propagate throughout the ecosystem before shipping the musl update.

A crater run in July 2024 found only about 2.4% of Rust projects were still affected. A crater run in June 2025 found 1.5% of Rust projects were affected. Most of that change comes from crater analyzing more Rust projects: the absolute number of broken projects went down by 15%, while the absolute number of analyzed projects went up by 35%.

At this point we expect there will be minimal breakage, and most breakage should be resolved by a cargo update. We believe this update shouldn't be held back any longer, as it contains critical fixes for the musl target.

Manual inspection of some of the affected projects indicates they largely haven't run cargo update in 2 years, often because they haven't had any changes in 2 years. Fixing these crates is as easy as cargo update.
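
For example, a sketch of the typical fix (the exact commands depend on your project; a plain cargo update also works):

$ cargo update -p libc
$ cargo build --target x86_64-unknown-linux-musl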

Build failures from this change will typically look like "some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified", usually an "undefined reference to `open64'", often while trying to build very old versions of the getrandom crate (hence the outsized impact on gamedev projects that haven't updated their dependencies in several years):

Example Build Failure

[INFO] [stderr]    Compiling guess_the_number v0.1.0 (/opt/rustwide/workdir)
[INFO] [stdout] error: linking with `cc` failed: exit status: 1
[INFO] [stdout]   |
[INFO] [stdout]   = note:  "cc" "-m64" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/rcrt1.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crti.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtbeginS.o" "/tmp/rustcMZMWZW/symbols.o" "<2 object files omitted>" "-Wl,--as-needed" "-Wl,-Bstatic" "/opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/{librand-bff7d8317cf08aa0.rlib,librand_chacha-612027a3597e9138.rlib,libppv_lite86-742ade976f63ace4.rlib,librand_core-be9c132a0f2b7897.rlib,libgetrandom-dc7f0d82f4cb384d.rlib,liblibc-abed7616303a3e0d.rlib,libcfg_if-66d55f6b302e88c8.rlib}.rlib" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{libstd-*,libpanic_unwind-*,libobject-*,libmemchr-*,libaddr2line-*,libgimli-*,librustc_demangle-*,libstd_detect-*,libhashbrown-*,librustc_std_workspace_alloc-*,libminiz_oxide-*,libadler2-*,libunwind-*}.rlib" "-lunwind" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{libcfg_if-*,liblibc-*}.rlib" "-lc" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{librustc_std_workspace_core-*,liballoc-*,libcore-*,libcompiler_builtins-*}.rlib" "-L" "/tmp/rustcMZMWZW/raw-dylibs" "-Wl,-Bdynamic" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-nostartfiles" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib" "-o" "/opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/guess_the_number-41a068792b5f051e" "-Wl,--gc-sections" "-static-pie" "-Wl,-z,relro,-z,now" "-nodefaultlibs" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtendS.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtn.o"
[INFO] [stdout]   = note: some arguments are omitted. use `--verbose` to show all linker arguments
[INFO] [stdout]   = note: /usr/bin/ld: /opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/libgetrandom-dc7f0d82f4cb384d.rlib(getrandom-dc7f0d82f4cb384d.getrandom.828c5c30a8428cf4-cgu.0.rcgu.o): in function `getrandom::util_libc::open_readonly':
[INFO] [stdout]           /opt/rustwide/cargo-home/registry/src/index.crates.io-1949cf8c6b5b557f/getrandom-0.2.8/src/util_libc.rs:150:(.text._ZN9getrandom9util_libc13open_readonly17hdc55d6ead142a889E+0xbc): undefined reference to `open64'
[INFO] [stdout]           collect2: error: ld returned 1 exit status
[INFO] [stdout]           
[INFO] [stdout]   = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
[INFO] [stdout]   = note: use the `-l` flag to specify native libraries to link
[INFO] [stdout]   = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#rustc-link-lib)
[INFO] [stdout] 
[INFO] [stdout] 
[INFO] [stderr] error: could not compile `guess_the_number` (bin "guess_the_number") due to 1 previous error

Updated targets

All Rust musl targets that bundle a copy of musl now bundle 1.2.5. All Rust musl targets now require musl 1.2.5 at a minimum.

In practice this mostly impacts only the three "Tier 2 With Host Tools" musl targets, which were pinned to musl 1.2.3:

  • aarch64-unknown-linux-musl
  • x86_64-unknown-linux-musl
  • powerpc64le-unknown-linux-musl

The fourth target at this level of support, loongarch64-unknown-linux-musl, is so new that it was always on musl 1.2.5.

Due to an apparent configuration oversight with crosstool-ng, all other targets were already bundling musl 1.2.5. These targets were silently upgraded to musl 1.2.4 in Rust 1.74.0 and silently upgraded to musl 1.2.5 in Rust 1.86. This oversight has been rectified and all targets have been pinned to musl 1.2.5 to prevent future silent upgrades (but hey, no one noticing bodes well for the ecosystem impact of this change). Their documentation has now been updated to reflect the fact that bundling 1.2.5 is actually intentional, and that 1.2.5 is now considered a minimum requirement.

Here are all the updated definitions:

Tier 2 with Host Tools

| target | notes |
| --- | --- |
| aarch64-unknown-linux-musl | ARM64 Linux with musl 1.2.5 |
| powerpc64le-unknown-linux-musl | PPC64LE Linux (kernel 4.19, musl 1.2.5) |
| x86_64-unknown-linux-musl | 64-bit Linux with musl 1.2.5 |

Tier 2 without Host Tools

| target | std | notes |
| --- | --- | --- |
| arm-unknown-linux-musleabi | ✓ | Armv6 Linux with musl 1.2.5 |
| arm-unknown-linux-musleabihf | ✓ | Armv6 Linux with musl 1.2.5, hardfloat |
| armv5te-unknown-linux-musleabi | ✓ | Armv5TE Linux with musl 1.2.5 |
| armv7-unknown-linux-musleabi | ✓ | Armv7-A Linux with musl 1.2.5 |
| armv7-unknown-linux-musleabihf | ✓ | Armv7-A Linux with musl 1.2.5, hardfloat |
| i586-unknown-linux-musl | ✓ | 32-bit Linux (musl 1.2.5, original Pentium) |
| i686-unknown-linux-musl | ✓ | 32-bit Linux with musl 1.2.5 (Pentium 4) |
| riscv64gc-unknown-linux-musl | ✓ | RISC-V Linux (kernel 4.20+, musl 1.2.5) |

Tier 3

| target | std | host | notes |
| --- | --- | --- | --- |
| hexagon-unknown-linux-musl | ✓ | | Hexagon Linux with musl 1.2.5 |
| mips-unknown-linux-musl | ✓ | | MIPS Linux with musl 1.2.5 |
| mips64-openwrt-linux-musl | ? | | MIPS64 for OpenWrt Linux musl 1.2.5 |
| mips64-unknown-linux-muslabi64 | ✓ | | MIPS64 Linux, N64 ABI, musl 1.2.5 |
| mips64el-unknown-linux-muslabi64 | ✓ | | MIPS64 (little endian) Linux, N64 ABI, musl 1.2.5 |
| mipsel-unknown-linux-musl | ✓ | | MIPS (little endian) Linux with musl 1.2.5 |
| powerpc-unknown-linux-musl | ? | | PowerPC Linux with musl 1.2.5 |
| powerpc-unknown-linux-muslspe | ? | | PowerPC SPE Linux with musl 1.2.5 |
| powerpc64-unknown-linux-musl | ✓ | ✓ | PPC64 Linux (kernel 4.19, musl 1.2.5) |
| riscv32gc-unknown-linux-musl | ? | | RISC-V Linux (kernel 5.4, musl 1.2.5 + RISCV32 support patches) |
| s390x-unknown-linux-musl | ✓ | | S390x Linux (kernel 3.2, musl 1.2.5) |
| thumbv7neon-unknown-linux-musleabihf | ? | | Thumb2-mode Armv7-A Linux with NEON, musl 1.2.5 |
| x86_64-unikraft-linux-musl | ✓ | | 64-bit Unikraft with musl 1.2.5 |

crates.io: Malicious crates finch-rust and sha-rust

Summary

On December 5th, the crates.io team was notified by Kush Pandya from the Socket Threat Research Team of two malicious crates: one trying to cause confusion with the existing finch crate while adding a dependency on the other, which performed data exfiltration.

These crates were:

  • finch-rust - 1 version published November 25, 2025, downloaded 28 times, used sha-rust as a dependency
  • sha-rust - 8 versions published between November 20 and November 25, 2025, downloaded 153 times

Actions taken

The user in question, face-lessssss, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

The deletions were performed at 15:52 UTC on December 5th.

We reported the associated repositories to GitHub and the account has been removed there as well.

Analysis

Socket has published their analysis in a blog post.

These crates had no dependent downstream crates on crates.io, and there is no evidence of either of these crates being downloaded outside of automated mirroring and scanning services.

Thanks

Our thanks to Kush Pandya from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Adam Harvey from the Rust Foundation for aiding in the response.

crates.io: Malicious crates evm-units and uniswap-utils

Summary

On December 2nd, the crates.io team was notified by Olivia Brown from the Socket Threat Research Team of two malicious crates which were downloading a payload that was likely attempting to steal cryptocurrency.

These crates were:

  • evm-units - 13 versions published in April 2025, downloaded 7257 times
  • uniswap-utils - 14 versions published in April 2025, downloaded 7441 times, used evm-units as a dependency

Actions taken

The user in question, ablerust, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

The deletions were performed at 22:01 UTC on December 2nd.

Analysis

Socket has published their analysis in a blog post.

These crates had no dependent downstream crates on crates.io.

Thanks

Our thanks to Olivia Brown from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Walter Pearce and Adam Harvey from the Rust Foundation for aiding in the response.

Lessons learned from the Rust Vision Doc process

Starting earlier this year, a group of us set out on a crazy quest: to author a "Rust vision doc". As we described it in the original project goal proposal:

The Rust Vision Doc will summarize the state of Rust adoption -- where is Rust adding value? what works well? what doesn't? -- based on conversations with individual Rust users from different communities, major Rust projects, and companies large and small that are adopting Rust.

Over the course of this year, the Vision Doc group has gathered up a lot of data. We began with a broad-based survey that got about 4200 responses. After that, we conducted over 70 interviews, each one about 45 minutes, with as broad a set of Rust users as we could find [1].

This is the first of a series of blog posts covering what we learned throughout that process and what recommendations we have to offer as a result. This first post is going to go broad. We'll discuss the process we used and where we think it could be improved going forward. We'll talk about some of the big themes we heard -- some that were surprising and others that were, well, not surprising at all. Finally, we'll close with some recommendations for how the project might do more work like this in the future.

The questions we were trying to answer

One of the first things we did in starting out with the vision doc was to meet with a User Research expert, Holly Ellis, who gave us a quick tutorial on how User Research works [2]. Working with her, we laid out a set of research questions that we wanted to answer. Our first cut was very broad, covering three themes:

  • Rust the technology:
    • "How does Rust fit into the overall language landscape? What is Rust's mission?"
    • "What brings people to Rust and why do they choose to use it for a particular problem...?"
    • "What would help Rust to succeed in these domains...?" (e.g., network systems, embedded)
    • "How can we scale Rust to industry-wide adoption? And how can we ensure that, as we do so, we continue to have a happy, joyful open-source community?"
  • Rust the global project:
    • "How can we improve the experience of using Rust for people across the globe?"
    • "How can we improve the experience of contributing to and maintaining Rust for people across the globe?"
  • Rust the open-source project:
    • "How can we tap into the knowledge, experience, and enthusiasm of a growing Rust userbase to improve Rust?"
    • "How can we ensure that individual or volunteer Rust maintainers are well-supported?"
    • "What is the right model for Foundation-project interaction?"

Step 1: Broad-based survey

Before embarking on individual interviews, we wanted to get a broad snapshot of Rust usage. We also wanted to find a base of people that we could talk to. We created a survey that asked a few short "demographic" questions -- e.g., where does the respondent live, what domains do they work on, how would they rate their experience -- and some open-ended questions about their journey to Rust, what kind of projects they feel are a good fit for Rust, what they found challenging when learning, etc. It also asked for (optional) contact information.

We got a LOT of responses -- over 4200! Analyzing this much data is not easy, and we were very grateful to Kapiche, who offered us free use of their tool to work through the data. ❤

The survey is useful in several ways. First, it's an interesting data-set in its own right, although you have to be aware of selection bias. Second, the survey also gave us something that we can use to cross-validate some of what we heard in 1:1 interviews and to look for themes we might otherwise have missed. And of course it gave us additional names of people we can talk to (though most respondents didn't leave contact information).

Step 2: Interviewing individuals

The next step after the survey was to get out there and talk to people. We sourced people from a lot of places: the survey and personal contacts, of course, but we also sat down with people at conferences and went to meetups. We even went to a Python meetup in an effort to find people who were a bit outside the usual "Rust circle".

When interviewing people, the basic insight of User Experience research is that you don't necessarily ask people the exact questions you want to answer. That is likely to get them speculating and giving you the answer that they think they "ought" to say. Instead, you come at it sideways. You ask them factual, non-leading questions. In other words, you certainly don't say, "Do you agree the borrow checker is really hard?" And you probably don't even say, "What is the biggest pain point you had with Rust?" Instead, you might say, "What was the last time you felt confused by an error message?" And then go from there, "Is this a typical example? If not, what's another case where you felt confused?"

To be honest, these sorts of "extremely non-leading questions" are kind of difficult to do. But they can uncover some surprising results.

We got answers -- but not all the answers we wanted

4200 survey responses and 70 interviews later, we got a lot of information -- but we still don't feel like we have the answers to some of the biggest questions. Given the kinds of questions we asked, we got a pretty good view on the kinds of things people love about Rust and what it offers relative to other languages. We got a sense for the broad areas that people find challenging. We also learned a few things about how the Rust project interacts with others and how things vary across the globe.

What we really don't have is enough data to say "if you do X, Y, and Z, that will really unblock Rust adoption in this domain". We just didn't get into enough technical detail, for example, to give guidance on which features ought to be prioritized, or to help answer specific design questions that the lang or libs team may consider.

One big lesson: there are only 24 hours in a day

One of the things we learned was that you need to stay focused. There were so many questions we wanted to ask, but only so much time in which to do so. Ultimately, we wound up narrowing our scope in several ways:

  • we focused primarily on the individual developer experience, and only had minimal discussion with companies as a whole;
  • we dove fairly deep into one area (the Safety Critical domain) but didn't go as deep into the details of other domains;
  • we focused primarily on Rust adoption, and in particular did not even attempt to answer the questions about "Rust the open-source project".

Another big lesson: haters gonna... stay quiet?

One thing we found surprisingly difficult was finding people to interview who didn't like Rust. 49% of survey respondents, for example, rated their Rust comfort as 4 or 5 out of 5, and only 18.5% said 1 or 2. And of those, only a handful gave contact information.

It turns out that people who think Rust isn't worth using mostly don't read the Rust blog or want to talk about that with a bunch of Rust fanatics [3]. This is a shame, of course, as those folks likely have a lot to teach us about the boundaries of where Rust adds value. We are currently doing some targeted outreach in an attempt to grow our scope here, so stay tuned, we may get more data.

One fun fact: enums are Rust's underappreciated superpower

We will do a deeper dive into the things people say that they like about Rust later (hint: performance and reliability both make the cut). One interesting thing we found was the number of people that talked specifically about Rust enums, which allow you to package up the state of your program along with the data it has available in that state. Enums are a concept that Rust adapted from functional languages like OCaml and Haskell and fit into its systems programming setting.
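
As a small illustration of what interviewees were describing (our example, not one from an interview): each enum variant carries exactly the data that is valid in that state, and the compiler forces every state to be handled.

enum Connection {
    Disconnected,
    Connecting { attempts: u32 },
    Connected { session_id: u64 },
}

fn describe(conn: &Connection) -> String {
    // `match` must cover every state, so forgetting one is a compile error.
    match conn {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connecting { attempts } => format!("retrying ({attempts})"),
        Connection::Connected { session_id } => format!("session {session_id}"),
    }
}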

"The usage of Enum is a new concept for me. And I like this concept. It's not a class and it's not just a boolean, limited to false or true. It has different states." -- New Rust developer

"Tagged unions. I don't think I've seriously used another production language which has that. Whenever I go back to a different language I really miss that as a way of accurately modeling the domain." -- Embedded developer

Where do we go from here? Create a user research team

When we set out to write the vision doc, we imagined that it would take the form of an RFC. We imagined that RFC identifying key focus areas for Rust and making other kinds of recommendations. Now that we've been through it, we don't think we have the data we need to write that kind of RFC (and we're also not sure if that's the right kind of RFC to write). But we did learn a lot and we are convinced of the importance of this kind of work.

Therefore, our plan is to do the following. First, we're going to write up a series of blog posts diving into what we learned about our research questions along with other kinds of questions that we encountered as we went.

Second, we plan to author an RFC proposing a dedicated user research team for the Rust org. The role of this team would be to gather data of all forms (interviews, surveys, etc) and make it available to the Rust project. And whenever they can, they would help to connect Rust customers directly with people extending and improving Rust.

The vision doc process was in many ways our first foray into this kind of research, and it taught us a few things:

  • First, we have to go broad and deep. For this first round, we focused on high-level questions about people's experiences with Rust, and we didn't get deep into technical blockers. This gives us a good overview but limits the depth of recommendations we can make.
  • Second, to answer specific questions we need to do specific research. One of our hypotheses was that we could use UX interviews to help decide thorny questions that come up in RFCs -- e.g., the notorious debate between await x and x.await from yesteryear. What we learned is "sort of". The broad interviews we conducted did give us information about what kinds of things are important to people (e.g., convenience vs reliability, and so forth), and we'll cover some of that in upcoming write-ups. But shedding light on specific questions (e.g., "will x.await be confused for a field access") will really require more specific research. This may be interviews but it could also be other kinds of tests. These are all things, though, that a user research team could help with.
  • Third, we should find ways to "open the data" and publish results incrementally. We conducted all of our interviews with a strong guarantee of privacy and we expect to delete the information we've gathered once this project wraps up. Our goal was to ensure people could talk in an unfiltered way. This should always be an option we offer people -- but that level of privacy has a cost, which is that we are not able to share the raw data, even widely across the Rust teams, and (worse) people have to wait for us to do analysis before they can learn anything. This won't work for a long-running team. At the same time, even for seemingly innocuous conversations, posting full transcripts of conversations openly on the internet may not be the best option, so we need to find a sensible compromise.
  1. "As wide a variety of Rust users as we could find" -- the last part is important. One of the weaknesses of this work is that we wanted to hear from more Rust skeptics than we did.
  2. Thanks Holly! We are ever in your debt.
  3. Shocking, I know. But, actually, it is a little -- most programmers love telling you how much they hate everything you do, in my experience?

Switching to Rust's own mangling scheme on nightly

TL;DR: starting in nightly-2025-11-21, rustc will use its own "v0" mangling scheme by default on nightly, instead of the previous default, which re-used C++'s mangling scheme.

Context

When Rust is compiled into object files and binaries, each item (functions, statics, etc) must have a globally unique "symbol" identifying it.

In C, the symbol name of a function is just the name that the function was defined with, such as strcmp. This is straightforward and easy to understand, but requires that each item have a globally unique name that doesn't overlap with any symbols from libraries that it is linked against. If two items had the same symbol then when the linker tried to resolve a symbol to an address in memory (of a function, say), then it wouldn't know which symbol is the correct one.

Languages like Rust and C++ define "symbol mangling schemes", leveraging information from the type system to give each item a unique symbol name. Without this, it would be possible to produce clashing symbols in a variety of ways - for example, every instantiation of a generic or templated function (or an overload in C++), which all share the same name in the surface language, would end up with clashing symbols; likewise the same name in different modules, such as a::foo and b::foo, would clash.
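
As a minimal sketch of the module case (ours, not from the post): with mangling, the two foos below get distinct symbols and the code compiles fine, but forcing the bare C-style name with #[no_mangle] (#[unsafe(no_mangle)] on the 2024 edition) makes both functions want the symbol `foo`, and the build fails with a duplicate-symbol error.

mod a {
    #[no_mangle] // strips mangling: the symbol is literally `foo`
    pub extern "C" fn foo() {}
}

mod b {
    #[no_mangle] // also wants the symbol `foo`: duplicate symbol error
    pub extern "C" fn foo() {}
}

fn main() {}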

Rust originally used a symbol mangling scheme based on the Itanium ABI's name mangling scheme used (sometimes) by C++. Over the years, it was extended in an inconsistent and ad-hoc way to support Rust features that the mangling scheme wasn't originally designed for. Rust's current legacy mangling scheme has a number of drawbacks:

  • Information about generic parameter instantiations is lost during mangling
  • It is internally inconsistent - some paths use an Itanium ABI-style encoding but some don't
  • Symbol names can contain . characters which aren't supported on all platforms
  • Symbol names include an opaque hash which depends on compiler internals and can't be easily replicated by other compilers or tools
  • There is no straightforward way to differentiate between Rust and C++ symbols

If you've ever tried to use Rust with a debugger or a profiler and found it hard to work with because you couldn't work out which functions were which, it's probably because information was being lost in the mangling scheme.

Rust's compiler team started working on our own mangling scheme back in 2018 with RFC 2603 (see the "v0 Symbol Format" chapter in rustc book for our current documentation on the format). Our "v0" mangling scheme has multiple advantageous properties:

  • An unambiguous encoding for everything that can end up in a binary's symbol table
  • Information about generic parameters is encoded in a reversible way
  • Mangled symbols are decodable such that it should be possible to identify concrete instances of generic functions
  • It doesn't rely on compiler internals
  • Symbols are restricted to only A-Z, a-z, 0-9 and _, helping ensure compatibility with tools on varied platforms
  • It tries to stay efficient and avoid unnecessarily long names and computationally-expensive decoding

However, rustc is not the only tool that interacts with Rust symbol names: the aforementioned debuggers, profilers and other tools all need to be updated to understand Rust's v0 symbol mangling scheme so that Rust's users can continue to work with Rust binaries using all the tools they're used to without having to look at mangled symbols. Furthermore, all of those tools need to have new releases cut and then those releases need to be picked up by distros. This takes time!

Fortunately, the compiler team now believes that support for our v0 mangling scheme is sufficiently widespread that it can start to be used by default by rustc.

Benefits

Rust backtraces, as well as debuggers, profilers and other tools that operate on compiled Rust code, will be able to show much more useful and readable names. This will especially help with async code, closures and generic functions.

It's easy to see the new mangling scheme in action; consider the following example:

fn foo<T>() {
    panic!()
}

fn main() {
    foo::<Vec<(String, &[u8; 123])>>();
}

With the legacy mangling scheme, all of the useful information about the generic instantiation of foo is lost in the symbol f::foo..

thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
  0: std::panicking::begin_panic
    at /rustc/d6c...582/library/std/src/panicking.rs:769:5
  1: f::foo
  2: f::main
  3: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

..but with the v0 mangling scheme, the useful details of the generic instantiation are preserved with f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>:

thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
  0: std::panicking::begin_panic
    at /rustc/d6c...582/library/std/src/panicking.rs:769:5
  1: f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>
  2: f::main
  3: <fn() as core::ops::function::FnOnce<()>>::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Possible drawbacks

Symbols using the v0 mangling scheme can be larger than symbols with the legacy mangling scheme, which can result in a slight increase in linking times and binary sizes if symbols aren't stripped (which they aren't by default). Fortunately this impact should be minor, especially with modern linkers like lld, which Rust will now default to on some targets.

Some old versions of tools or distros, or niche tools that the compiler team is unaware of, may not have had support for the v0 mangling scheme added. When using these tools, the only consequence is that users may encounter mangled symbols. rustfilt can be used to demangle Rust symbols when a tool cannot.

In any case, using the new mangling scheme can be disabled if any problem occurs: use the -Csymbol-mangling-version=legacy -Zunstable-options flag to revert to using the legacy mangling scheme.

Explicitly enabling the legacy mangling scheme requires nightly; it is not intended to be stabilised, so that support for it can eventually be removed.

Adding v0 support in your tools

If you maintain a tool that interacts with Rust symbols and does not support the v0 mangling scheme, there are Rust and C implementations of a v0 symbol demangler available in the rust-lang/rustc-demangle repository that can be integrated into your project.
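
For instance, a minimal sketch of demangling with the Rust implementation (the rustc-demangle crate from that repository); the mangled string below is illustrative only, and demangle simply echoes back any input it cannot parse:

// [dependencies]
// rustc-demangle = "0.1"

fn main() {
    // v0 symbols start with the `_R` prefix.
    let mangled = "_RNvCs1234_7mycrate3foo";
    println!("{}", rustc_demangle::demangle(mangled));
}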

Summary

rustc will use our "v0" mangling scheme on nightly for all targets starting in tomorrow's rustup nightly (nightly-2025-11-21).

Let us know if you encounter problems, by opening an issue on GitHub.

If that happens, you can use the legacy mangling scheme with the -Csymbol-mangling-version=legacy -Zunstable-options flags, either by adding them to the usual RUSTFLAGS environment variable or to a project's .cargo/config.toml configuration file, like so:

[build]
rustflags = ["-Csymbol-mangling-version=legacy", "-Zunstable-options"]

If you like the sound of the new symbol mangling version and would like to start using it on stable or beta channels of Rust, then you can similarly use the -Csymbol-mangling-version=v0 flag today via RUSTFLAGS or .cargo/config.toml:

[build]
rustflags = ["-Csymbol-mangling-version=v0"]

Launching the 2025 State of Rust Survey

It’s time for the 2025 State of Rust Survey!

The Rust Project has been collecting valuable information about the Rust programming language community through our annual State of Rust Survey since 2016, which means that this year marks the tenth edition of the survey!

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. The results will allow us to more deeply understand the global Rust community and how it evolves over time.

Like last year, the 2025 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until December 17. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.

We are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Chinese (Simplified)
  • Chinese (Traditional)
  • French
  • German
  • Japanese
  • Ukrainian
  • Russian
  • Spanish
  • Portuguese (Brazil)

Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of the Rust Survey Team, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):

Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

By the way, the Rust Survey team is looking for new members. If you like working with data and coordinating people, and would like to help us out with managing various Rust surveys, please drop by our Zulip channel and say hi.

Announcing Rust 1.91.1

The Rust team has published a new point release of Rust, 1.91.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.91.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.91.1

Rust 1.91.1 includes fixes for two regressions introduced in the 1.91.0 release.

Linker and runtime errors on Wasm

Most targets supported by Rust identify symbols by their name, but Wasm identifies them with a symbol name and a Wasm module name. The #[link(wasm_import_module)] attribute allows you to customize the Wasm module name an extern block refers to:

#[link(wasm_import_module = "hello")]
extern "C" {
    pub fn world();
}

Rust 1.91.0 introduced a regression in the attribute, which could cause linker failures during compilation ("import module mismatch" errors) or the wrong function being used at runtime (leading to undefined behavior, including crashes and silent data corruption). This happened when the same symbol name was imported from two different Wasm modules across multiple Rust crates.
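
As a hedged sketch of the affected shape (module and function names are ours, and the two extern blocks are shown as modules in one file for brevity, while the regression required the pattern to be spread across separate crates):

mod crate_a {
    #[link(wasm_import_module = "module_a")]
    extern "C" {
        // Same symbol name as below, but imported from "module_a".
        pub fn log_message(len: i32);
    }
}

mod crate_b {
    #[link(wasm_import_module = "module_b")]
    extern "C" {
        // Same symbol name as above, but imported from "module_b"; under the
        // regression the two imports could be conflated.
        pub fn log_message(len: i32);
    }
}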

Rust 1.91.1 fixes the regression. More details are available in issue #148347.

Cargo target directory locking broken on illumos

Cargo relies on locking the target/ directory during a build to prevent concurrent invocations of Cargo from interfering with each other. Not all filesystems support locking (most notably some networked ones): if the OS returns the Unsupported error when attempting to lock, Cargo assumes locking is not supported and proceeds without it.

Cargo 1.91.0 switched from custom code interacting with the OS APIs to the File::lock standard library method (recently stabilized in Rust 1.89.0). Due to an oversight, that method always returned Unsupported on the illumos target, causing Cargo to never lock the build directory on illumos regardless of whether the filesystem supported it.
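
For illustration, a minimal sketch (ours, not Cargo's actual code) of the fallback pattern described above, built on the stabilized File::lock API:

use std::fs::File;
use std::io::ErrorKind;

// Returns Ok(true) if the exclusive lock was taken, or Ok(false) if the
// filesystem does not support locking, in which case the caller proceeds
// without one.
fn try_exclusive_lock(file: &File) -> std::io::Result<bool> {
    match file.lock() {
        Ok(()) => Ok(true),
        Err(e) if e.kind() == ErrorKind::Unsupported => Ok(false),
        Err(e) => Err(e),
    }
}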

Rust 1.91.1 fixes the oversight in the standard library by enabling the File::lock family of functions on illumos, indirectly fixing the Cargo regression.

Contributors to 1.91.1

Many people came together to create Rust 1.91.1. We couldn't have done it without all of you. Thanks!

Announcing Rust 1.91.0

The Rust team is happy to announce a new version of Rust, 1.91.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.91.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.91.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.91.0 stable

aarch64-pc-windows-msvc is now a Tier 1 platform

The Rust compiler supports a wide variety of targets, but the Rust Team can't provide the same level of support for all of them. To clearly mark how supported each target is, we use a tiering system:

  • Tier 3 targets are technically supported by the compiler, but we don't check whether their code builds or passes the tests, and we don't provide any prebuilt binaries as part of our releases.
  • Tier 2 targets are guaranteed to build and we provide prebuilt binaries, but we don't execute the test suite on those platforms: the produced binaries might not work or might have bugs.
  • Tier 1 targets provide the highest support guarantee, and we run the full test suite on those platforms for every change merged in the compiler. Prebuilt binaries are also available.

Rust 1.91.0 promotes the aarch64-pc-windows-msvc target to Tier 1 support, bringing our highest guarantees to users of 64-bit ARM systems running Windows.

Add lint against dangling raw pointers from local variables

While Rust's borrow checking prevents dangling references from being returned, it doesn't track raw pointers. With this release, we are adding a warn-by-default lint on raw pointers to local variables being returned from functions. For example, code like this:

fn f() -> *const u8 {
    let x = 0;
    &x
}

will now produce a lint:

warning: a dangling pointer will be produced because the local variable `x` will be dropped
 --> src/lib.rs:3:5
  |
1 | fn f() -> *const u8 {
  |           --------- return type of the function is `*const u8`
2 |     let x = 0;
  |         - `x` is part of the function and will be dropped at the end of the function
3 |     &x
  |     ^^
  |
  = note: pointers do not have a lifetime; after returning, the `u8` will be deallocated
    at the end of the function because nothing is referencing it as far as the type system is
    concerned
  = note: `#[warn(dangling_pointers_from_locals)]` on by default

Note that the code above is not unsafe, as it itself doesn't perform any dangerous operations. Only dereferencing the raw pointer after the function returns would be unsafe. We expect future releases of Rust to add more functionality helping authors to safely interact with raw pointers, and with unsafe code more generally.

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Platform Support

Refer to Rust’s platform support page for more information on Rust’s tiered platform support.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.91.0

Many people came together to create Rust 1.91.0. We couldn't have done it without all of you. Thanks!

Project goals for 2025H2

On Sep 9, we merged RFC 3849, declaring our goals for the "second half" of 2025 (2025H2) -- well, the last 3 months, at least, since "yours truly" ran a bit behind getting the goals program organized.

Flagship themes

In prior goals programs, we had a few major flagship goals, but since many of these goals were multi-year programs, it was hard to see what progress had been made. This time we decided to organize things a bit differently. We established four flagship themes, each of which covers a number of more specific goals. These themes cover the goals we expect to be the most impactful and constitute our major focus as a Project for the remainder of the year. The four themes identified in the RFC are as follows:

  • Beyond the &, making it possible to create user-defined smart pointers that are as ergonomic as Rust's built-in references &.
  • Unblocking dormant traits, extending the core capabilities of Rust's trait system to unblock long-desired features for language interop, lending iteration, and more.
  • Flexible, fast(er) compilation, making it faster to build Rust programs and improving support for specialized build scenarios like embedded usage and sanitizers.
  • Higher-level Rust, making higher-level usage patterns in Rust easier.

"Beyond the &"

| Goal | Point of contact | Team(s) and Champion(s) |
| --- | --- | --- |
| Reborrow traits | Aapo Alasuutari | compiler (Oliver Scherer), lang (Tyler Mandry) |
| Design a language feature to solve Field Projections | Benno Lossin | lang (Tyler Mandry) |
| Continue Experimentation with Pin Ergonomics | Frank King | compiler (Oliver Scherer), lang (TC) |

One of Rust's core value propositions is that it's a "library-based language"—libraries can build abstractions that feel built-in to the language even when they're not. Smart pointer types like Rc and Arc are prime examples, implemented purely in the standard library yet feeling like native language features. However, Rust's built-in reference types (&T and &mut T) have special capabilities that user-defined smart pointers cannot replicate. This creates a "second-class citizen" problem where custom pointer types can't provide the same ergonomic experience as built-in references.

The "Beyond the &" initiative aims to share the special capabilities of &, allowing library authors to create smart pointers that are truly indistinguishable from built-in references in terms of syntax and ergonomics. This will enable more ergonomic smart pointers for use in cross-language interop (e.g., references to objects in other languages like C++ or Python) and for low-level projects like Rust for Linux that use smart pointers to express particular data structures.

"Unblocking dormant traits"

| Goal | Point of contact | Team(s) and Champion(s) |
| --- | --- | --- |
| Evolving trait hierarchies | Taylor Cramer | compiler, lang (Taylor Cramer), libs-api, types (Oliver Scherer) |
| In-place initialization | Alice Ryhl | lang (Taylor Cramer) |
| Next-generation trait solver | lcnr | types (lcnr) |
| Stabilizable Polonius support on nightly | Rémy Rakic | types (Jack Huey) |
| SVE and SME on AArch64 | David Wood | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras), types |

Rust's trait system is one of its most powerful features, but it has a number of longstanding limitations that are preventing us from adopting new patterns. The goals in this category unblock a number of new capabilities:

  • Polonius will enable new borrowing patterns, and in particular unblock "lending iterators" (a sketch of that pattern follows after this list). Over the last few goal periods, we have identified an "alpha" version of Polonius that addresses the most important cases while being relatively simple and optimizable. Our goal for 2025H2 is to implement this algorithm in a form that is ready for stabilization in 2026.
  • The next-generation trait solver is a refactored trait solver that unblocks better support for numerous language features (implied bounds, negative impls, the list goes on) in addition to closing a number of existing bugs and sources of unsoundness. Over the last few goal periods, the trait solver went from being an early prototype to being in production use for coherence checking. The goal for 2025H2 is to prepare it for stabilization.
  • The work on evolving trait hierarchies will make it possible to refactor some parts of an existing trait into a new supertrait so they can be used on their own. This unblocks a number of features where the existing trait is insufficiently general, in particular stabilizing support for custom receiver types, a prior Project goal that wound up blocked on this refactoring. This will also make it safer to provide stable traits in the standard library while preserving the ability to evolve them in the future.
  • The work to expand Rust's Sized hierarchy will permit us to express types that are neither Sized nor ?Sized, such as extern types (which have no size) or Arm's Scalable Vector Extension (which have a size that is known at runtime but not at compilation time). This goal builds on RFC #3729 and RFC #3838, authored in previous Project goal periods.
  • In-place initialization allows creating structs and values that are tied to a particular place in memory. While useful directly for projects doing advanced C interop, it also unblocks expanding dyn Trait to support async fn and -> impl Trait methods, as compiling such methods requires the ability for the callee to return a future whose size is not known to the caller.
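
To make the lending-iterator point concrete, here is a sketch (ours, not from the goals RFC) of the shape in question: each item borrows from the iterator itself. Defining and implementing it is expressible with stable GATs today, but some natural ways of consuming such iterators still hit borrow-checker limitations that the Polonius work aims to lift.

trait LendingIterator {
    // The lifetime parameter ties each item to the borrow of `self`.
    type Item<'a>
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Example implementor: lends overlapping mutable windows into its own buffer,
// something a regular `Iterator` cannot express.
struct Windows {
    buf: Vec<u8>,
    pos: usize,
}

impl LendingIterator for Windows {
    type Item<'a> = &'a mut [u8]
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        if self.pos + 2 > self.buf.len() {
            return None;
        }
        let start = self.pos;
        self.pos += 1;
        Some(&mut self.buf[start..start + 2])
    }
}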

"Flexible, fast(er) compilation"

| Goal | Point of contact | Team(s) and Champion(s) |
| --- | --- | --- |
| build-std | David Wood | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Promoting Parallel Front End | Sparrow Li | compiler |
| Production-ready cranelift backend | Folkert de Vries | compiler, wg-compiler-performance |

The "Flexible, fast(er) compilation" initiative focuses on improving Rust's build system to better serve both specialized use cases and everyday development workflows:

"Higher-level Rust"

| Goal | Point of contact | Team(s) and Champion(s) |
| --- | --- | --- |
| Stabilize cargo-script | Ed Page | cargo (Ed Page), compiler, lang (Josh Triplett), lang-docs (Josh Triplett) |
| Ergonomic ref-counting: RFC decision and preview | Niko Matsakis | compiler (Santiago Pastorino), lang (Niko Matsakis) |

People generally start using Rust for foundational use cases, where the requirements for performance or reliability make it an obvious choice. But once they get used to it, they often find themselves turning to Rust even for higher-level use cases, like scripting, web services, or even GUI applications. Rust is often "surprisingly tolerable" for these high-level use cases -- except for some specific pain points that, while they impact everyone using Rust, hit these use cases particularly hard. We plan two flagship goals this period in this area:

  • We aim to stabilize cargo script, a feature that allows single-file Rust programs that embed their dependencies, making it much easier to write small utilities, share code examples, and create reproducible bug reports without the overhead of full Cargo projects (see the sketch after this list).
  • We aim to finalize the design of ergonomic ref-counting and to finalize the experimental impl feature so it is ready for beta testing. Ergonomic ref-counting makes it less cumbersome to work with ref-counted types like Rc and Arc, particularly in closures.
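
To make the cargo script item concrete, here is a sketch of a single-file program with an embedded manifest. The frontmatter syntax shown is what is currently implemented on nightly and could still change before stabilization; it can be run today with something like cargo +nightly -Zscript hello.rs.

#!/usr/bin/env cargo
---
[dependencies]
rand = "0.8"
---

fn main() {
    println!("lucky number: {}", rand::random::<u8>());
}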

What to expect next

For the remainder of 2025 you can expect monthly blog posts covering the major progress on the Project goals.

Looking at the broader picture, we have now done three iterations of the goals program, and we want to judge how it should be run going forward. To start, Nandini Sharma from CMU has been conducting interviews with various Project members to help us see what's working with the goals program and what could be improved. We expect to spend some time discussing what we should do and to be launching the next iteration of the goals program next year. Whatever form that winds up taking, Tomas Sedovic, the Rust program manager hired by the Leadership Council, will join me in running the program.

Appendix: Full list of Project goals.

Read the full slate of Rust Project goals.

The full slate of Project goals is as follows. These goals all have identified points of contact who will drive the work forward as well as a viable work plan.

Invited goals. Some of the goals below are "invited goals", meaning that for that goal to happen we need someone to step up and serve as a point of contact. To find the invited goals, look for the "Help wanted" badge in the table below. Invited goals have reserved capacity for teams and a mentor, so if you are someone looking to help Rust progress, they are a great way to get involved.

| Goal | Point of contact | Team(s) and Champion(s) |
| --- | --- | --- |
| Develop the capabilities to keep the FLS up to date | Pete LeVasseur | bootstrap (Jakub Beránek), lang (Niko Matsakis), opsem, spec (Pete LeVasseur), types |
| Getting Rust for Linux into stable Rust: compiler features | Tomas Sedovic | compiler (Wesley Wiser) |
| Getting Rust for Linux into stable Rust: language features | Tomas Sedovic | lang (Josh Triplett), lang-docs (TC) |
| Borrow checking in a-mir-formality | Niko Matsakis | types (Niko Matsakis) |
| Reborrow traits | Aapo Alasuutari | compiler (Oliver Scherer), lang (Tyler Mandry) |
| build-std | David Wood | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Prototype Cargo build analysis | Weihang Lo | cargo (Weihang Lo) |
| Rework Cargo Build Dir Layout | Ross Sullivan | cargo (Weihang Lo) |
| Prototype a new set of Cargo "plumbing" commands | Help Wanted | cargo |
| Stabilize cargo-script | Ed Page | cargo (Ed Page), compiler, lang (Josh Triplett), lang-docs (Josh Triplett) |
| Continue resolving cargo-semver-checks blockers for merging into cargo | Predrag Gruevski | cargo (Ed Page), rustdoc (Alona Enraght-Moony) |
| Emit Retags in Codegen | Ian McCormack | compiler (Ralf Jung), opsem (Ralf Jung) |
| Comprehensive niche checks for Rust | Bastian Kersting | compiler (Ben Kimock), opsem (Ben Kimock) |
| Const Generics | Boxy | lang (Niko Matsakis) |
| Ergonomic ref-counting: RFC decision and preview | Niko Matsakis | compiler (Santiago Pastorino), lang (Niko Matsakis) |
| Evolving trait hierarchies | Taylor Cramer | compiler, lang (Taylor Cramer), libs-api, types (Oliver Scherer) |
| Design a language feature to solve Field Projections | Benno Lossin | lang (Tyler Mandry) |
| Finish the std::offload module | Manuel Drehwald | compiler (Manuel Drehwald), lang (TC) |
| Run more tests for GCC backend in the Rust's CI | Guillaume Gomez | compiler (Wesley Wiser), infra (Marco Ieni) |
| In-place initialization | Alice Ryhl | lang (Taylor Cramer) |
| C++/Rust Interop Problem Space Mapping | Jon Bauman | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay), opsem |
| Finish the libtest json output experiment | Ed Page | cargo (Ed Page), libs-api, testing-devex |
| MIR move elimination | Amanieu d'Antras | compiler, lang (Amanieu d'Antras), opsem, wg-mir-opt |
| Next-generation trait solver | lcnr | types (lcnr) |
| Implement Open API Namespace Support | Help Wanted | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols) |
| Promoting Parallel Front End | Sparrow Li | compiler |
| Continue Experimentation with Pin Ergonomics | Frank King | compiler (Oliver Scherer), lang (TC) |
| Stabilizable Polonius support on nightly | Rémy Rakic | types (Jack Huey) |
| Production-ready cranelift backend | Folkert de Vries | compiler, wg-compiler-performance |
| Stabilize public/private dependencies | Help Wanted | cargo (Ed Page), compiler |
| Expand the Rust Reference to specify more aspects of the Rust language | Josh Triplett | lang-docs (Josh Triplett), spec (Josh Triplett) |
| reflection and comptime | Oliver Scherer | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett) |
| Relink don't Rebuild | Jane Lusby | cargo, compiler |
| Rust Vision Document | Niko Matsakis | leadership-council |
| rustc-perf improvements | James | compiler, infra |
| Stabilize rustdoc doc_cfg feature | Guillaume Gomez | rustdoc (Guillaume Gomez) |
| Add a team charter for rustdoc team | Guillaume Gomez | rustdoc (Guillaume Gomez) |
| SVE and SME on AArch64 | David Wood | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras), types |
| Rust Stabilization of MemorySanitizer and ThreadSanitizer Support | Jakob Koschel | bootstrap, compiler, infra, project-exploit-mitigations |
| Type System Documentation | Boxy | types (Boxy) |
| Unsafe Fields | Jack Wrenn | compiler (Jack Wrenn), lang (Scott McMurray) |

Continue Reading…

Rust Blog

docs.rs: changed default targets

Changes to default build targets on docs.rs

This post announces two changes to the list of default targets used to build documentation on docs.rs.

Crate authors can specify a custom list of targets using docs.rs metadata in Cargo.toml. If this metadata is not provided, docs.rs falls back to a default list. We are updating this list to better reflect the current state of the Rust ecosystem.

Apple silicon (ARM64) replaces x86_64

Reflecting Apple's transition from x86_64 to its own ARM64 silicon, the Rust project has updated its platform support tiers. The aarch64-apple-darwin target is now Tier 1, while x86_64-apple-darwin has moved to Tier 2. You can read more about this in RFC 3671 and RFC 3841.

To align with this, docs.rs will now use aarch64-apple-darwin as the default target for Apple platforms instead of x86_64-apple-darwin.

Linux ARM64 replaces 32-bit x86

Support for 32-bit i686 architectures is declining, and major Linux distributions have begun to phase it out.

Consequently, we are replacing the i686-unknown-linux-gnu target with aarch64-unknown-linux-gnu in our default set.

New default target list

The updated list of default targets is:

  • x86_64-unknown-linux-gnu
  • aarch64-apple-darwin (replaces x86_64-apple-darwin)
  • x86_64-pc-windows-msvc
  • aarch64-unknown-linux-gnu (replaces i686-unknown-linux-gnu)
  • i686-pc-windows-msvc

Opting out

If your crate requires the previous default target list, you can explicitly define it in your Cargo.toml:

[package.metadata.docs.rs]
targets = [
    "x86_64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "x86_64-pc-windows-msvc",
    "i686-unknown-linux-gnu",
    "i686-pc-windows-msvc"
]

Note that docs.rs continues to support any target available in the Rust toolchain; only the default list has changed.

Continue Reading…

Rust Blog

Announcing the New Rust Project Directors

We are happy to announce that we have completed the annual process to elect new Project Directors.

The new Project Directors are:

They will join Ryan Levick and Carol Nichols to make up the five members of the Rust Foundation Board of Directors who represent the Rust Project.

We would also like to thank the outgoing Project Directors for their contributions and service:

The board is made up of Project Directors, who come from and represent the Rust Project, and Member Directors, who represent the corporate members of the Rust Foundation. Both of these director groups have equal voting power.

We look forward to working with and being represented by this new group of project directors.

We were fortunate to have a number of excellent candidates and this was a difficult decision. We wish to express our gratitude to all of the candidates who were considered for this role! We also extend our thanks to the project as a whole who participated by nominating candidates and providing additional feedback once the nominees were published.

Finally, we want to share our appreciation for Tomas Sedovic for facilitating the election process. An overview of the election process can be found in a previous blog post here.

Continue Reading…

Rust Blog

crates.io: Malicious crates faster_log and async_println

Summary

On September 24th, the crates.io team was notified by Kirill Boychenko from the Socket Threat Research Team of two malicious crates which were actively searching file contents for Ethereum private keys, Solana private keys, and arbitrary byte arrays for exfiltration.

These crates were:

  • faster_log - Published on May 25th, 2025, downloaded 7181 times
  • async_println - Published on May 25th, 2025, downloaded 1243 times

The malicious code was executed at runtime, when running or testing a project depending on them. Notably, they did not execute any malicious code at build time. Except for their malicious payload, these crates copied the source code, features, and documentation of legitimate crates, using a name similar to theirs (a case of typosquatting1).

Actions taken

The user accounts in question were immediately disabled, and the crates were deleted2 from crates.io shortly after. We have retained copies of all logs associated with the users and the malicious crate files for further analysis.

The deletion was performed at 15:34 UTC on September 24, 2025.

Analysis

Both crates were copies of a crate which provided logging functionality, and the logging implementation remained functional in the malicious crates. The original crate had a feature which performed log file packing, iterating over the files of an associated directory.

The attacker inserted code to perform the malicious action during a log packing operation, which searched the log files being processed from that directory for:

  • Quoted Ethereum private keys (0x + 64 hex)
  • Solana-style Base58 secrets
  • Bracketed byte arrays

The crates then proceeded to exfiltrate the results of this search to https://mainnet[.]solana-rpc-pool[.]workers[.]dev/.

These crates had no dependent downstream crates on crates.io.

The malicious users associated with these crates had no other crates or publishes, and the team is actively investigating associated actions in our retained3 logs.

Thanks

Our thanks to Kirill Boychenko from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team, Pietro Albini from the Rust Security Response WG and Walter Pearce from the Rust Foundation for aiding in the response.

  1. typosquatting is a technique used by bad actors to initiate dependency confusion attacks where a legitimate user might be tricked into using a malicious dependency instead of their intended dependency — for example, a bad actor might try to publish a crate at proc-macro3 to catch users of the legitimate proc-macro2 crate.
  2. The crates were preserved for future analysis should there be other attacks, and to inform scanning efforts in the future.
  3. One year of logs are retained on crates.io, but only 30 days are immediately available on our log platform. We chose not to go further back in our analysis, since IP address based analysis is limited by the use of dynamic IP addresses in the wild, and because the relevant IP address was part of an allocation to a residential ISP.

Continue Reading…

Rust Blog

Announcing Rust 1.90.0

The Rust team is happy to announce a new version of Rust, 1.90.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.90.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.90.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.90.0 stable

LLD is now the default linker on x86_64-unknown-linux-gnu

The x86_64-unknown-linux-gnu target will now use the LLD linker for linking Rust crates by default. This should result in improved linking performance vs the default Linux linker (BFD), particularly for large binaries, binaries with a lot of debug information, and for incremental rebuilds.

In the vast majority of cases, LLD should be backwards compatible with BFD, and you should not see any difference other than reduced compilation time. However, if you do run into any new linker issues, you can always opt out using the -C linker-features=-lld compiler flag, either by adding it to the usual RUSTFLAGS environment variable or to a project's .cargo/config.toml configuration file, like so:

[target.x86_64-unknown-linux-gnu]
rustflags = ["-Clinker-features=-lld"]

If you encounter any issues with the LLD linker, please let us know. You can read more about the switch to LLD, some benchmark numbers and the opt out mechanism here.

Cargo adds native support for workspace publishing

cargo publish --workspace is now supported, automatically publishing all of the crates in a workspace in the right order (following any dependencies between them).

This has long been possible with external tooling or manual ordering of individual publishes, but this brings the functionality into Cargo itself.

Native integration allows Cargo's publish verification to run a build across the full set of to-be-published crates as if they were published, including during dry-runs. Note that publishes are still not atomic -- network errors or server-side failures can still lead to a partially published workspace.
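For example, since the publish verification also runs during dry-runs, you can exercise it across the whole workspace without uploading anything:

$ cargo publish --workspace --dry-run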

Demoting x86_64-apple-darwin to Tier 2 with host tools

GitHub will soon discontinue providing free macOS x86_64 runners for public repositories. Apple has also announced their plans for discontinuing support for the x86_64 architecture.

In accordance with these changes, as of Rust 1.90, we have demoted the x86_64-apple-darwin target from Tier 1 with host tools to Tier 2 with host tools. This means that the target, including tools like rustc and cargo, will be guaranteed to build but is not guaranteed to pass our automated test suite.

For users, this change will not immediately cause impact. Builds of both the standard library and the compiler will still be distributed by the Rust Project for use via rustup or alternative installation methods while the target remains at Tier 2. Over time, it's likely that reduced test coverage for this target will cause things to break or fall out of compatibility with no further announcements.

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Platform Support

  • x86_64-apple-darwin is now a tier 2 target

Refer to Rust’s platform support page for more information on Rust’s tiered platform support.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.90.0

Many people came together to create Rust 1.90.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

crates.io phishing campaign

We received multiple reports of a phishing campaign targeting crates.io users (from the rustfoundation.dev domain name), mentioning a compromise of our infrastructure and asking users to authenticate to limit damage to their crates.

These emails are malicious and come from a domain name not controlled by the Rust Foundation (nor the Rust Project), seemingly with the purpose of stealing your GitHub credentials. We have no evidence of a compromise of the crates.io infrastructure.

We are taking steps to get the domain name taken down and to monitor for suspicious activity on crates.io. Do not follow any links in these emails if you receive them, and mark them as phishing with your email provider.

If you have any further questions please reach out to security@rust-lang.org and help@crates.io.

Continue Reading…

Rust Blog

Rust compiler performance survey 2025 results

Two months ago, we launched the first Rust Compiler Performance Survey, with the goal of helping us understand the biggest pain points of Rust developers related to build performance. It is clear that this topic is very important for the Rust community, as the survey received over 3 700 responses! We would like to thank everyone who participated in the survey, and especially those who described their workflows and challenges with an open answer. We plan to run this survey annually, so that we can observe long-term trends in Rust build performance and its perception.

In this post, we'll show some interesting results and insights that we got from the survey and promote work that we have already done recently or that we plan to do to improve the build performance of Rust code. If you would like to examine the complete results of the survey, you can find them here.

And now strap in, as there is a lot of data to explore! As this post is relatively long, here is an index of topics that it covers:

Overall satisfaction

To understand the overall sentiment, we asked our respondents to rate their satisfaction with their build performance, on a scale from 0 (worst) to 10 (best). The average rating was 6, with most people rating their experience with 7 out of 10:

[PNG] [SVG]

To help us understand the overall build experience in more detail, we also analyzed all open answers (over a thousand of them) written by our respondents, which helped us identify several recurring themes that we will discuss in this post.

One thing that is clear from both the satisfaction rating and the open answers is that the build experience differs wildly across users and workflows, and it is not as clear-cut as "Rust builds are slow". We actually received many positive comments about users being happy with Rust build performance, and appreciation for it being improved vastly over the past several years to the point where it stopped being a problem.

People also liked to compare their experience with other competing technologies. For example, many people wrote that the build performance of Rust is not worse, or is even better, than what they saw with C++. On the other hand, others noted that the build performance of languages such as Go or Zig is much better than that of Rust.

While it is great to see some developers being happy with the state we have today, it is clear that many people are not so lucky, and Rust's build performance limits their productivity. Around 45% of respondents who answered that they are no longer using Rust said that long compile times were at least one of the reasons why they stopped.

In our survey we received a lot of feedback pointing out real issues and challenges in several areas of build performance, which is what we will focus on in this post.

Important workflows

The challenges that Rust developers experience with build performance are not always as simple as the compiler itself being slow. There are many diverse workflows with competing trade-offs, and optimizing build performance for them might require completely different solutions. Some approaches for improving build performance can also be quite unintuitive. For example, stabilizing certain language features could help remove the need for certain build scripts or proc macros, and thus speed up compilation across the Rust ecosystem. You can watch this talk from RustWeek about build performance to learn more.

It is difficult to enumerate all possible build workflows, but we at least tried to ask about workflows that we assumed are common and could limit the productivity of Rust developers the most:

[PNG] [SVG]

We can see that all the workflows that we asked about cause significant problems to at least a fraction of the respondents, but some of them more so than others. To gain more information about the specific problems that developers face, we also asked a more detailed, follow-up question:

[PNG] [SVG]

Based on the answers to these two questions and other experiences shared in the open answers, we identified three groups of workflows that we will discuss next:

  • Incremental rebuilds after making a small change
  • Type checking using cargo check or with a code editor
  • Clean, from-scratch builds, including CI builds

Incremental rebuilds

Waiting too long for an incremental rebuild after making a small source code change was by far the most common complaint in the open answers that we received, and it was also the most common problem that respondents said they struggle with. Based on our respondents' answers, this comes down to three main bottlenecks:

  • Changes in workspaces trigger unnecessary rebuilds. If you modify a crate in a workspace that has several dependent crates and perform a rebuild, all those dependent crates will currently have to be recompiled. This can cause a lot of unnecessary work and dramatically increase the latency of rebuilds in large (or deep) workspaces. We have some ideas about how to improve this workflow, such as the "Relink, don't rebuild" proposal, but these are currently in a very experimental stage.
  • The linking phase is too slow. This was a very common complaint, and it is indeed a real issue, because unlike the rest of the compilation process, linking is always performed "from scratch". The Rust compiler usually delegates linking to an external/system linker, so its performance is not completely within our hands. However, we are attempting to switch to faster linkers by default. For example, the most popular target (x86_64-unknown-linux-gnu) will very soon switch to the LLD linker, which provides significant performance wins. Long-term, it is possible that some linkers (e.g. wild) will allow us to perform even linking incrementally.
  • Incremental rebuild of a single crate is too slow. The performance of this workflow depends on the cleverness of the incremental engine of the Rust compiler. While it is already very sophisticated, there are some parts of the compilation process that are not incremental yet or that are not cached in an optimal way. For example, expansion of derive proc macros is not currently cached, although work is underway to change that.

Several users have mentioned that they would like to see Rust perform hot-patching (such as the subsecond system used by the Dioxus UI framework or similar approaches used e.g. by the Bevy game engine). While these hot-patching systems are very exciting and can produce truly near-instant rebuild times for specialized use-cases, it should be noted that they also come with many limitations and edge-cases, and it does not seem that a solution that would allow hot-patching to work in a robust way has been found yet.

To gauge the typical rebuild latency, we asked our respondents to pick a single Rust project that they work on and which causes them to struggle with build times the most, and tell us how long they have to wait for it to be rebuilt after making a code change.

[PNG] [SVG]

Even though many developers do not actually experience this latency after each code change, as they consume results of type checking or inline annotations in their code editor, the fact that 55% of respondents have to wait more than ten seconds for a rebuild is far from ideal.

If we partition these results based on answers to other questions, it is clear that the rebuild times depend a lot on the size of the project:

[PNG] [SVG]

And, to a lesser extent, also on the number of dependencies used:

[PNG] [SVG]

We would love to get to a point where the time needed to rebuild a Rust project is dependent primarily on the amount of performed code changes, rather than on the size of the codebase, but clearly we are not there yet.

Type checking and IDE performance

Approximately 60% of respondents say that they use cargo terminal commands to type check, build or test their code, with cargo check being the most commonly used command performed after each code change:

[PNG] [SVG]

While the performance of cargo check does not seem to be as big of a blocker as e.g. incremental rebuilds, it also causes some pain points. One of the most common ones present in the survey responses is the fact that cargo check does not share the build cache with cargo build. This causes additional compilation to happen when you run e.g. cargo check several times to find all type errors, and when it succeeds, you follow up with cargo build to actually produce a built artifact. This workflow is an example of competing trade-offs, because sharing the build cache between these two commands by unifying them more would likely make cargo check itself slightly slower, which might be undesirable to some users. It is possible that we might be able to find some middle ground to improve the status quo though. You can follow updates to this work in this issue.

A related aspect is the latency of type checking in code editors and IDEs. Around 87% of respondents say that they use inline annotations in their editor as the primary mechanism of inspecting compiler errors, and around 33% of them consider waiting for these annotations to be a big blocker. In the open answers, we also received many reports of Rust Analyzer's performance and memory usage being a limiting factor.

The maintainers of Rust Analyzer are working hard on improving its performance. Its caching system is being improved to reduce analysis latency, the distributed builds of the editor are now optimized with PGO, which provided 15-20% performance wins, and work is underway to integrate the compiler's new trait solver into Rust Analyzer, which could eventually also result in increased performance.

More than 35% of users said that they consider the IDE and Cargo blocking one another to be a big problem. There is an existing workaround for this, where you can configure Rust Analyzer to use a different target directory than Cargo, at the cost of increased disk space usage. We realized that this workaround had not been documented in a very visible way, so we added it to the FAQ section of the Rust Analyzer book.

Clean and CI builds

Around 20% of participants responded that clean builds are a significant blocker for them. In order to improve their performance, you can try a recently introduced experimental Cargo and compiler option called hint-mostly-unused, which can in certain situations help improve the performance of clean builds, particularly if your dependencies contain a lot of code that might not actually be used by your crate(s).

One area where clean builds might happen often is Continuous Integration (CI). 1495 respondents said that they use CI to build Rust code, and around 25% of them consider its performance to be a big blocker for them. However, almost 36% of respondents who consider CI build performance to be a big issue said that they do not use any caching in CI, which we found surprising. One explanation might be that the generated artifacts (the target directory) are too large for effective caching and run into usage limits of CI providers, which is something that we saw mentioned repeatedly in the open answers section. We have recently introduced an experimental Cargo and compiler option called -Zembed-metadata that is designed to reduce the size of target directories, and work is also underway to regularly garbage collect them. This might help with the disk space usage issue somewhat in the future.

One additional way to significantly reduce disk usage is to reduce the amount of generated debug information, which brings us to the next section.

Debug information

The default Cargo dev profile generates full debug information (debuginfo) both for workspace crates and also all dependencies. This enables stepping through code with a debugger, but it also increases disk usage of the target directory, and crucially it makes compilation and linking slower. This effect can be quite large, as our benchmarks show a possible improvement of 2-30% in cycle counts if we reduce the debuginfo level to line-tables-only (which only generates enough debuginfo for backtraces to work), and the improvements are even larger if we disable debuginfo generation completely1.

However, if Rust developers debug their code after most builds, then this cost might be justified. We thus asked them how often they use a debugger to debug their Rust code:

[PNG] [SVG]

Based on these results, it seems that the respondents of our survey do not actually use a debugger all that much2.

However, when we asked people if they require debuginfo to be generated by default, the responses were much less clear-cut:

[PNG] [SVG]

This is the problem with changing defaults: it is challenging to improve the workflows of one user without regressing the workflow of another. For completeness, here are the answers to the previous question partitioned on the answer to the "How often do you use a debugger" question:

[PNG] [SVG]

It was surprising for us to see that around a quarter of respondents who (almost) never use a debugger still want to have full debuginfo generated by default.

Of course, you can always disable debuginfo manually to improve your build performance, but not everyone knows about that option, and defaults matter a lot. The Cargo team is considering ways of changing the status quo, for example by reducing the level of generated debug information in the dev profile, and introducing a new built-in profile designed for debugging.
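For illustration, here is one way to reduce debuginfo manually today using the existing Cargo profile settings mentioned above; this is only a sketch, not the new default being considered:

[profile.dev]
# Enough debuginfo for backtraces, but much cheaper to generate and link.
debug = "line-tables-only"

# Optionally drop debuginfo for dependencies entirely.
[profile.dev.package."*"]
debug = false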

Workarounds for improving build performance

Build performance of Rust is affected by many different aspects, including the configuration of the build system (usually Cargo) and the Rust compiler, but also the organization of Rust crates and used source code patterns. There are thus several approaches that can be used to improve build performance by either using different configuration options or restructuring source code. We asked our respondents if they are even aware of such possibilities, whether they have tried them and how effective they were:

[PNG] [SVG]

It seems that the most popular (and effective) mechanisms for improving build performance are reducing the number of dependencies and their activated features, and splitting larger crates into smaller crates. The most common way of improving build performance without making source code changes seems to be the usage of an alternative linker. The mold and LLD linkers in particular appear to be very popular:

[PNG] [SVG] [Wordcloud of open answers]

We have good news here! The most popular x86_64-unknown-linux-gnu Linux target will start using the LLD linker in the next Rust stable release, resulting in faster link times by default. Over time, we will be able to evaluate how disruptive this change is to the overall Rust ecosystem, and whether we could e.g. switch to a different (even faster) linker.
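If you would like to experiment with an alternative linker yourself, one common setup is to route linking through clang in .cargo/config.toml; this is a sketch that assumes clang and mold are installed on your system:

[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]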

Build performance guide

We were surprised by the relatively large number of users who were unaware of some approaches for improving compilation times, in particular those that are very easy to try and typically do not require source code changes (such as reducing debuginfo or using a different linker or codegen backend). Furthermore, almost 42% of respondents have not tried to use any mechanism for improving build performance whatsoever. While this is not totally unexpected, as some of these mechanisms require using the nightly toolchain or making non-trivial changes to source code, we think that one of the reasons is also simply that Rust developers might not know about these mechanisms being available. In the open answers, several people also noted that they would appreciate some sort of official guidance from the Rust Project about such mechanisms for improving compile times.

It should be noted that the mechanisms that we asked about are in fact workarounds that present various trade-offs, and these should always be carefully considered. Several people have expressed dissatisfaction with some of these workarounds in the open answers, as they find it unacceptable to modify their code (which could sometimes result e.g. in increased maintenance costs or worse runtime performance) just to achieve reasonable compile times. Nevertheless, these workarounds can still be incredibly useful in some cases.

The feedback that we received shows that it might be beneficial to spread awareness of these mechanisms in the Rust community more, as some of them can make a really large difference in build performance, but also to candidly explain the trade-offs that they introduce. Even though several great resources that cover this topic already exist online, we decided to create an official guide for optimizing build performance (currently work-in-progress), which will likely be hosted in the Cargo book. The aim of this guide is to increase the awareness of various mechanisms for improving build performance, and also provide a framework for evaluating their trade-offs.

Our long-standing goal is to make compilation so fast that similar workarounds will not be necessary anymore for the vast majority of use-cases. However, there is no free lunch: the combination of Rust's strong type system guarantees, its compilation model, and its heavy focus on runtime performance often works against very fast (re)build performance, and might require the use of at least some workarounds. We hope that this guide will help Rust developers learn about them and evaluate them for their specific use-case.

Understanding why builds are slow

When Rust developers experience slow builds, it can be challenging to identify where exactly the compilation process is spending time, and what the bottleneck could be. It seems that only very few Rust developers leverage tools for profiling their builds:

[PNG] [SVG]

This hardly comes as a surprise. There are currently not that many ways of intuitively understanding the performance characteristics of Cargo and rustc. Some tools offer only a limited amount of information (e.g. cargo build --timings), and the output of others (e.g. -Zself-profile) is very hard to interpret without knowledge of the compiler internals.

To slightly improve this situation, we have recently added support for displaying link times to the cargo build --timings output, to provide more information about the possible bottleneck in crate compilation (note this feature has not been stabilized yet).

Long-term, it would be great to have tooling that could help Rust developers diagnose compilation bottlenecks in their crates without them having to understand how the compiler works. For example, it could help answer questions such as "Which code had to be recompiled after a given source change" or "Which (proc) macros take the longest time to expand or produce the largest output", and ideally even offer some actionable suggestions. We plan to work on such tooling, but it will take time to manifest.

One approach that could help Rust compiler contributors understand why Rust (re)builds are slow "in the wild" is the opt-in compilation metrics collection initiative.

What's next

There are more interesting things in the survey results, for example how answers to selected questions differ based on the operating system used. You can examine the full results in the full report PDF.

We would like to thank once more everyone who has participated in our survey. It helped us understand which workflows are the most painful for Rust developers, and especially the open answers provided several great suggestions that we tried to act upon.

Even though the Rust compiler is getting increasingly faster every year, we understand that many Rust developers require truly significant improvements to improve their productivity, rather than "just" incremental performance wins. Our goal for the future is to finally stabilize long-standing initiatives that could improve build performance a lot, such as the Cranelift codegen backend or the parallel compiler frontend. One such initiative (using a faster linker by default) will finally land soon, but the fact that it took many years shows how difficult it is to make such large, cross-cutting changes to the compilation process.

There are other ambitious ideas for reducing (re)build times, such as avoiding unnecessary workspace rebuilds or e.g. using some form of incremental linking, but these will require a lot of work and design discussions.

We know that some people are wondering why it takes so much time to achieve progress in improving the build performance of Rust. The answer is relatively simple. These changes require a lot of work, domain knowledge (that takes a relatively long time to acquire) and many discussions and code reviews, and the pool of people that have time and motivation to work on them or review these changes is very limited. Current compiler maintainers and contributors (many of whom work on the compiler as volunteers, without any funding) work very hard to keep up with maintaining the compiler and keeping it working with the high-quality bar that Rust developers expect, across many targets, platforms and operating systems. Introducing large structural changes, which are likely needed to reach massive performance improvements, would require a lot of concentrated effort and funding.

  1. This benchmark was already performed using the fast LLD linker. If a slower linker was used, the build time wins would likely be even larger.
  2. Potentially because of the strong invariants upheld by the Rust type system, and partly also because the Rust debugging experience might not be optimal for many users, which is a feedback that we received in the State of Rust 2024 survey.
Rust Blog

Welcoming the Rust Innovation Lab

TL;DR: Rustls is the inaugural project of the Rust Innovation Lab, which is a new home for Rust projects under the Rust Foundation.

At the Rust Foundation's August meeting, the Project Directors and the rest of the Rust Foundation board voted to approve Rustls as the first project housed under the newly formed Rust Innovation Lab. Prior to the vote, the Project Directors consulted with the Leadership Council who confirmed the Project's support for this initiative.

The Rust Innovation Lab (RIL) is designed to provide support for funded Rust-based open source projects from the Rust Foundation in the form of governance, legal, networking, marketing, and administration, while keeping the technical direction solely in the hands of the current maintainers. As with the other work of the Rust Foundation (e.g. its many existing initiatives), the purpose of the RIL is to strengthen the Rust ecosystem generally.

The Foundation has been working behind the scenes to establish the Rust Innovation Lab, which includes setting up infrastructure under the Foundation to ensure a smooth transition for Rustls into the RIL. More details are available in the Foundation's announcement and on the Rust Innovation Lab's page.

We are all excited by the formation of the Rust Innovation Lab. The support this initiative will provide to Rustls (and, eventually, other important projects that are using Rust) will improve software security for the entire industry. The Rust Project is grateful for the support of the Rust Foundation corporate members who are making this initiative possible for the benefit of everyone.

More information on the criteria for projects wishing to become part of the RIL and the process for applying will be coming soon. The Project Directors and Leadership Council have been and will continue working with the Foundation to communicate information, questions, and feedback with the Rust community about the RIL as the details are worked out.

Continue Reading…

Rust Blog

Faster linking times with 1.90.0 stable on Linux using the LLD linker

TL;DR: rustc will start using the LLD linker by default on the x86_64-unknown-linux-gnu target starting with the next stable release (1.90.0, scheduled for 2025-09-18), which should significantly reduce linking times. Test it out on beta now, and please report any encountered issues.

Some context

Linking time is often a big part of compilation time. When rustc needs to build a binary or a shared library, it will usually call the default linker installed on the system to do that (this can be changed on the command-line or by the target for which the code is compiled).

Linkers do an important job, with concerns about stability, backwards-compatibility and so on. For these and other reasons, the default linkers on the most popular operating systems are usually older programs, designed when computers only had a single core, so they tend to be slow on a modern machine. For example, when building ripgrep 13 in debug mode on Linux, roughly half of the time is actually spent in the linker.

There are different linkers, however, and the usual advice to improve linking times is to use one of these newer and faster linkers, like LLVM's lld or Rui Ueyama's mold.

Some of Rust's wasm and aarch64 targets already use lld by default. When using rustup, rustc ships with a version of lld for this purpose. When CI builds LLVM to use in the compiler, it also builds the linker and packages it. It's referred to as rust-lld to avoid colliding with any lld already installed on the user's machine.

Since improvements to linking times are substantial, it would be a good default to use in the most popular targets. This has been discussed for a long time, for example in issues #39915 and #71515.

To expand our testing, we have enabled rustc to use rust-lld by default on nightly, in May 2024. No major issues have been reported since then.

We believe we've done all the internal testing that we could, on CI, crater, on our benchmarking infrastructure and on nightly, and plan to enable rust-lld to be the linker used by default on x86_64-unknown-linux-gnu for stable builds in 1.90.0.

Benefits

While this also enables the compiler to use more linker features in the future, the most immediate benefit is much improved linking times.

Here are more details from the ripgrep example mentioned above: for an incremental rebuild, linking is reduced 7x, resulting in a 40% reduction in end-to-end compilation times. For a from-scratch debug build, it is a 20% improvement.

Before/after comparison of a ripgrep incremental debug build

Most binaries should see some improvements here, but it's especially significant with e.g. bigger binaries, or for incremental rebuilds, or when involving debuginfo. These usually see bottlenecks in the linker.

Here's a link to the complete results from our benchmarks.

Possible drawbacks

From our prior testing, we don't really expect issues to happen in practice. It is a drop-in replacement for the vast majority of cases, but lld is not bug-for-bug compatible with GNU ld.

In any case, using rust-lld can be disabled if any problem occurs: use the -C linker-features=-lld flag to revert to using the system's default linker.

Some crates somehow relying on these differences could need additional link args, though we also expect this to be quite rare. Let us know if you encounter problems, by opening an issue on GitHub.

Some of the big gains in performance come from parallelism, which could be undesirable in resource-constrained environments, or for heavy projects that are already reaching hardware limits.

Summary, and call for testing

rustc will use rust-lld on x86_64-unknown-linux-gnu, starting with the 1.90.0 stable release, for much improved linking times. Rust 1.90.0 will be released next month, on the 18th of September 2025.

This linker change is already available on the current beta (1.90.0-beta.6). To help everyone prepare for this landing on stable, please test your projects on beta and let us know if you encounter problems, by opening an issue on GitHub.

If that happens, you can revert to the default linker with the -C linker-features=-lld flag. Either by adding it to the usual RUSTFLAGS environment variable, or to a project's .cargo/config.toml configuration file, like so:

[target.x86_64-unknown-linux-gnu]
rustflags = ["-Clinker-features=-lld"]

Rust Blog

Demoting x86_64-apple-darwin to Tier 2 with host tools

In Rust 1.90.0, the target x86_64-apple-darwin will be demoted to Tier 2 with host tools. The standard library and the compiler will continue to be built and distributed, but automated tests of these components are no longer guaranteed to be run.

Background

Rust has supported macOS for a long time, with some amount of support dating back to Rust 0.1 and likely before that. During that time period, Apple has changed CPU architectures from x86 to x86_64 and now to Apple silicon, ultimately announcing the end of support for the x86_64 architecture.

Similarly, GitHub has announced that they will no longer provide free macOS x86_64 runners for public repositories. The Rust Project uses these runners to execute automated tests for the x86_64-apple-darwin target. Since the target tier policy requires that Tier 1 platforms must run tests in CI, the x86_64-apple-darwin target must be demoted to Tier 2.

What changes?

Starting with Rust 1.90.0, x86_64-apple-darwin will be Tier 2 with host tools. For users, nothing will change immediately; builds of both the standard library and the compiler will still be distributed by the Rust Project for use via rustup or alternative installation methods.

Over time, this target will likely accumulate bugs faster due to reduced testing.

Future

If the x86_64-apple-darwin target causes concrete problems, it may be demoted further. No plans for further demotion have been made yet.

For more details on the motivation of the demotion, see RFC 3841.

Continue Reading…

Rust Blog

Announcing Rust 1.89.0

The Rust team is happy to announce a new version of Rust, 1.89.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.89.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.89.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.89.0 stable

Explicitly inferred arguments to const generics

Rust now supports _ as an argument to const generic parameters, inferring the value from surrounding context:

pub fn all_false<const LEN: usize>() -> [bool; LEN] {
  [false; _]
}

Similar to the rules for when _ is permitted as a type, _ is not permitted as an argument to const generics when in a signature:

// This is not allowed
pub const fn all_false<const LEN: usize>() -> [bool; _] {
  [false; LEN]
}

// Neither is this
pub const ALL_FALSE: [bool; _] = all_false::<10>();

Mismatched lifetime syntaxes lint

Lifetime elision in function signatures is an ergonomic aspect of the Rust language, but it can also be a stumbling point for newcomers and experts alike. This is especially true when lifetimes are inferred in types where it isn't syntactically obvious that a lifetime is even present:

// The returned type `std::slice::Iter` has a lifetime, 
// but there's no visual indication of that.
//
// Lifetime elision infers the lifetime of the return 
// type to be the same as that of `scores`.
fn items(scores: &[u8]) -> std::slice::Iter<u8> {
   scores.iter()
}

Code like this will now produce a warning by default:

warning: hiding a lifetime that's elided elsewhere is confusing
 --> src/lib.rs:1:18
  |
1 | fn items(scores: &[u8]) -> std::slice::Iter<u8> {
  |                  ^^^^^     -------------------- the same lifetime is hidden here
  |                  |
  |                  the lifetime is elided here
  |
  = help: the same lifetime is referred to in inconsistent ways, making the signature confusing
  = note: `#[warn(mismatched_lifetime_syntaxes)]` on by default
help: use `'_` for type paths
  |
1 | fn items(scores: &[u8]) -> std::slice::Iter<'_, u8> {
  |                                             +++

We first attempted to improve this situation back in 2018 as part of the rust_2018_idioms lint group, but strong feedback about the elided_lifetimes_in_paths lint showed that it was too blunt of a hammer as it warns about lifetimes which don't matter to understand the function:

use std::fmt;

struct Greeting;

impl fmt::Display for Greeting {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        //                -----^^^^^^^^^ expected lifetime parameter
        // Knowing that `Formatter` has a lifetime does not help the programmer
        "howdy".fmt(f)
    }
}

We then realized that the confusion we want to eliminate occurs when both

  1. lifetime elision inference rules connect an input lifetime to an output lifetime
  2. it's not syntactically obvious that a lifetime exists

There are two pieces of Rust syntax that indicate that a lifetime exists: & and ', with ' being subdivided into the inferred lifetime '_ and named lifetimes 'a. When a type uses a named lifetime, lifetime elision will not infer a lifetime for that type. Using these criteria, we can construct three groups:

| Self-evident it has a lifetime | Allow lifetime elision to infer a lifetime | Examples |
| ------------------------------ | ------------------------------------------ | --------------------------------- |
| No | Yes | ContainsLifetime |
| Yes | Yes | &T, &'_ T, ContainsLifetime<'_> |
| Yes | No | &'a T, ContainsLifetime<'a> |

The mismatched_lifetime_syntaxes lint checks that the inputs and outputs of a function belong to the same group. For the initial motivating example above, &[u8] falls into the second group while std::slice::Iter<u8> falls into the first group. We say that the lifetimes in the first group are hidden.

Because the input and output lifetimes belong to different groups, the lint will warn about this function, reducing confusion about when a value has a meaningful lifetime that isn't visually obvious.
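Following the lint's suggestion, making the hidden lifetime visible with '_ resolves the warning:

// The `'_` makes it visually obvious that the returned iterator
// borrows from `scores`.
fn items(scores: &[u8]) -> std::slice::Iter<'_, u8> {
    scores.iter()
}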

The mismatched_lifetime_syntaxes lint supersedes the elided_named_lifetimes lint, which did something similar for named lifetimes specifically.

Future work on the elided_lifetimes_in_paths lint intends to split it into more focused sub-lints with an eye to warning about a subset of them eventually.

More x86 target features

The target_feature attribute now supports the sha512, sm3, sm4, kl and widekl target features on x86. Additionally, a number of avx512 intrinsics and target features are now supported on x86:

#[target_feature(enable = "avx512bw")]
pub fn cool_simd_code(/* .. */) -> /* ... */ {
    /* ... */
}


Cross-compiled doctests

Doctests will now be tested when running cargo test --doc --target other_target. This may result in some amount of breakage due to would-be-failing doctests now being tested.

Failing tests can be disabled by annotating the doctest with ignore-<target> (docs):

/// ```ignore-x86_64
/// panic!("something")
/// ```
pub fn my_function() { }

i128 and u128 in extern "C" functions

i128 and u128 no longer trigger the improper_ctypes_definitions lint, meaning these types may be used in extern "C" functions without warning. This comes with some caveats:

  • The Rust types are ABI- and layout-compatible with (unsigned) __int128 in C when the type is available.
  • On platforms where __int128 is not available, i128 and u128 do not necessarily align with any C type.
  • i128 is not necessarily compatible with _BitInt(128) on any platform, because _BitInt(128) and __int128 may not have the same ABI (as is the case on x86-64).
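As a minimal sketch of what the lint now accepts (the add_wide function and its C counterpart here are hypothetical), a declaration like this no longer produces a warning:

// Assumed C side: unsigned __int128 add_wide(__int128 a, unsigned __int128 b);
unsafe extern "C" {
    fn add_wide(a: i128, b: u128) -> u128;
}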

This is the last bit of follow up to the layout changes from last year: https://blog.rust-lang.org/2024/03/30/i128-layout-update/.

Demoting x86_64-apple-darwin to Tier 2 with host tools

GitHub will soon discontinue providing free macOS x86_64 runners for public repositories. Apple has also announced their plans for discontinuing support for the x86_64 architecture.

In accordance with these changes, the Rust project is in the process of demoting the x86_64-apple-darwin target from Tier 1 with host tools to Tier 2 with host tools. This means that the target, including tools like rustc and cargo, will be guaranteed to build but is not guaranteed to pass our automated test suite.

We expect that the RFC for the demotion to Tier 2 with host tools will be accepted between the releases of Rust 1.89 and 1.90, which means that Rust 1.89 will be the last release of Rust where x86_64-apple-darwin is a Tier 1 target.

For users, this change will not immediately cause impact. Builds of both the standard library and the compiler will still be distributed by the Rust Project for use via rustup or alternative installation methods while the target remains at Tier 2. Over time, it's likely that reduced test coverage for this target will cause things to break or fall out of compatibility with no further announcements.

Standards Compliant C ABI on the wasm32-unknown-unknown target

extern "C" functions on the wasm32-unknown-unknown target now have a standards compliant ABI. See this blog post for more information: https://blog.rust-lang.org/2025/04/04/c-abi-changes-for-wasm32-unknown-unknown.

Platform Support

Refer to Rust’s platform support page for more information on Rust’s tiered platform support.

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.89.0

Many people came together to create Rust 1.89.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

crates.io: development update

Since our last development update in February 2025, we have continued to make significant improvements to crates.io. In this blog post, we want to give you an update on the latest changes that we have made to crates.io over the past few months.

Trusted Publishing

We are excited to announce that we have implemented "Trusted Publishing" support on crates.io, as described in RFC #3691. This feature was inspired by the PyPI team's excellent work in this area, and we want to thank them for the inspiration!

Trusted Publishing eliminates the need for GitHub Actions secrets when publishing crates from your CI/CD pipeline. Instead of managing API tokens, you can now configure which GitHub repository you trust directly on crates.io. That repository is then allowed to request a short-lived API token for publishing in a secure way using OpenID Connect (OIDC). While Trusted Publishing is currently limited to GitHub Actions, we have built it in a way that allows other CI/CD providers like GitLab CI to be supported in the future.

To get started with Trusted Publishing, you'll need to publish your first release manually. After that, you can set up trusted publishing for future releases. The detailed documentation is available at https://crates.io/docs/trusted-publishing.

Trusted Publishers Settings

Here's an example of how to set up GitHub Actions to use Trusted Publishing:

name: Publish to crates.io

on:
  push:
    tags: ['v*']  # Triggers when pushing tags starting with 'v'

jobs:
  publish:
    runs-on: ubuntu-latest

    environment: release  # Optional: for enhanced security
    permissions:
      id-token: write     # Required for OIDC token exchange

    steps:
    - uses: actions/checkout@v4
    - uses: rust-lang/crates-io-auth-action@v1
      id: auth
    - run: cargo publish
      env:
        CARGO_REGISTRY_TOKEN: ${{ steps.auth.outputs.token }}

OpenGraph Images

Previously, crates.io used a single OpenGraph image for all pages. We have now implemented dynamic OpenGraph image generation, where each crate has a dedicated image that is regenerated when new versions are published.

These images include the crate name, keywords, description, latest version (or rather the default version that we show for the crate), number of releases, license, and crate size. This provides much more useful information when crates.io links are shared on social media platforms or in chat applications.

OpenGraph Image for the bon crate

The image generation has been extracted to a dedicated crate: crates_io_og_image (GitHub). We're also adding basic theming support in PR #3 to allow docs.rs to reuse the code for their own OpenGraph images.

Under the hood, the image generation uses two other excellent Rust projects: Typst for layout and text rendering, and oxipng for PNG optimization.

docs.rs rebuilds

Crate owners can now trigger documentation rebuilds for docs.rs directly from the crate's version list on crates.io. This can be useful when docs.rs builds have failed or when you want to take advantage of new docs.rs features without having to publish a new release just for that.

docs.rs Rebuild Confirmation

We would like to thank our crates.io team member @eth3lbert for implementing the initial version of this feature in PR #11422.

README alert support

We've added support for rendering GitHub-style alerts in README files. This feature allows crate authors to use alert blocks like > [!NOTE], > [!WARNING], and > [!CAUTION] in their README markdown, which will now be properly styled and displayed on crates.io.

README alerts example

This enhancement was also implemented by @eth3lbert in PR #11441, building on initial work by @kbdharun.

Miscellaneous

These were some of the more visible changes to crates.io over the past couple of months, but a lot has happened "under the hood" as well. Here are a couple of examples:

Email system refactoring

Previously, we used the format!() macro and string concatenation to create emails, which made them hard to maintain and inconsistent in styling. We have migrated to the minijinja crate and now use templates instead.

The new system includes a template inheritance system for consistent branding across all emails. This change also enables us to support HTML emails in the future.
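For those unfamiliar with minijinja, here is a minimal, self-contained sketch of how template rendering with it looks in general; this is not one of crates.io's actual email templates:

use minijinja::{context, Environment};

fn main() {
    let mut env = Environment::new();
    // Register a template inline; crates.io loads its templates from files instead.
    env.add_template("welcome", "Hello {{ name }}, welcome to {{ site }}!")
        .unwrap();

    let template = env.get_template("welcome").unwrap();
    let body = template
        .render(context! { name => "Ferris", site => "crates.io" })
        .unwrap();
    assert_eq!(body, "Hello Ferris, welcome to crates.io!");
}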

SemVer sorting optimization

Previously, we had to load all versions from the database and sort them by SemVer on the API server, which was inefficient for crates with many versions. Our PostgreSQL provider did not support the semver extension, so we had to implement sorting in application code.

PR #10763 takes advantage of JSONB support in PostgreSQL and its btree ordering specification to implement SemVer sorting on the database side. This reduces the load on our API servers and improves response times for crates with many versions.
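To illustrate the general idea (this is a simplified sketch, not the actual crates.io implementation, and it glosses over the full pre-release precedence rules), a version can be mapped to a composite key whose element-wise ordering approximates SemVer ordering; crates.io stores an analogous structure as JSONB and lets PostgreSQL's btree ordering do the sorting:

use semver::Version;

// Simplified: a release (empty pre-release) sorts after any pre-release of the
// same version, but real pre-release precedence is more nuanced than this.
fn sort_key(v: &Version) -> (u64, u64, u64, bool, String) {
    (v.major, v.minor, v.patch, v.pre.is_empty(), v.pre.as_str().to_string())
}

fn main() {
    let pre = Version::parse("1.2.3-alpha.1").unwrap();
    let rel = Version::parse("1.2.3").unwrap();
    assert!(sort_key(&pre) < sort_key(&rel));
}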

Feedback

We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

Rust Blog

Stabilizing naked functions

Rust 1.88.0 stabilizes the #[unsafe(naked)] attribute and the naked_asm! macro which are used to define naked functions.

A naked function is marked with the #[unsafe(naked)] attribute, and its body consists of a single naked_asm! call. For example:

/// SAFETY: Respects the 64-bit System-V ABI.
#[unsafe(naked)]
pub extern "sysv64" fn wrapping_add(a: u64, b: u64) -> u64 {
    // Equivalent to `a.wrapping_add(b)`.
    core::arch::naked_asm!(
        "lea rax, [rdi + rsi]",
        "ret"
    );
}

What makes naked functions special — and gives them their name — is that the handwritten assembly block defines the entire function body. Unlike non-naked functions, the compiler does not add any special handling for arguments or return values.

This feature is a more ergonomic alternative to defining functions using global_asm!. Naked functions are used in low-level settings like Rust's compiler-builtins, operating systems, and embedded applications.

Why use naked functions?

But wait, if naked functions are just syntactic sugar for global_asm!, why add them in the first place?

To see the benefits, let's rewrite the wrapping_add example from the introduction using global_asm!:

// SAFETY: `wrapping_add` is defined in this module,
// and expects the 64-bit System-V ABI.
unsafe extern "sysv64" {
    safe fn wrapping_add(a: u64, b: u64) -> u64;
}

core::arch::global_asm!(
    r#"
        // Platform-specific directives that set up a function.
        .section .text.wrapping_add,"ax",@progbits
        .p2align 2
        .globl wrapping_add
        .type wrapping_add,@function

wrapping_add:
        lea rax, [rdi + rsi]
        ret

.Ltmp0:
        .size wrapping_add, .Ltmp0-wrapping_add
    "#
);

The assembly block starts and ends with the directives (.section, .p2align, etc.) that are required to define a function. These directives are mechanical, but they are different between object file formats. A naked function will automatically emit the right directives.

Next, the wrapping_add name is hardcoded, and will not participate in Rust's name mangling. That makes it harder to write cross-platform code, because different targets have different name mangling schemes (e.g. x86_64 macOS prefixes symbols with _, but Linux does not). The unmangled symbol is also globally visible — so that the extern block can find it — which can cause symbol resolution conflicts. A naked function's name does participate in name mangling and won't run into these issues.

A further limitation that this example does not show is that functions defined using global assembly cannot use generics. Const generics in particular are useful in combination with assembly.
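
For example, a naked function can splice a const generic parameter into its assembly through a const operand (a minimal sketch, not from the original post):

#[unsafe(naked)]
pub extern "sysv64" fn add_offset<const N: u64>(a: u64) -> u64 {
    // `N` is substituted into the assembly template at monomorphization time;
    // it must fit in the 32-bit displacement that `lea` accepts here.
    core::arch::naked_asm!(
        "lea rax, [rdi + {n}]",
        "ret",
        n = const N,
    );
}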

Finally, having just one definition provides a consistent place for (safety) documentation and attributes, with less risk of them getting out of date. Proper safety comments are essential for naked functions. The naked attribute is unsafe because the ABI (sysv64 in our example), the signature, and the implementation have to be consistent.

How did we get here?

Naked functions have been in the works for a long time.

The original RFC for naked functions is from 2015. That RFC was superseded by RFC 2972 in 2020. Inline assembly in Rust had changed substantially at that point, and the new RFC limited the body of naked functions to a single asm! call with some additional constraints. And now, 10 years after the initial proposal, naked functions are stable.

Two additional notable changes helped prepare naked functions for stabilization:

Introduction of the naked_asm! macro

The body of a naked function must be a single naked_asm! call. This macro is a blend between asm! (it is in a function body) and global_asm! (only some operand types are accepted).

The initial implementation of RFC 2972 added lints onto a standard asm! call in a naked function. This approach made it hard to write clear error messages and documentation. With the dedicated naked_asm! macro the behavior is much easier to specify.

Lowering to global_asm!

The initial implementation relied on LLVM to lower functions with the naked attribute for code generation. This approach had two issues:

  • LLVM would sometimes add unexpected additional instructions to what the user wrote.
  • Rust has non-LLVM code generation backends now, and they would have had to implement LLVM's (unspecified!) behavior.

The implementation that is stabilized now instead converts the naked function into a piece of global assembly. The code generation backends can already emit global assembly, and this strategy guarantees that the whole body of the function is just the instructions that the user wrote.

What's next for assembly?

We're working on further assembly ergonomics improvements. If naked functions are something you are excited about and (may) use, we'd appreciate you testing these new features and providing feedback on their designs.

extern "custom" functions

Naked functions usually get the extern "C" calling convention. But often that calling convention is a lie. In many cases, naked functions don't implement an ABI that Rust knows about. Instead they use some custom calling convention that is specific to that function.

The abi_custom feature adds extern "custom" functions and blocks, which allows us to correctly write code like this example from compiler-builtins:

#![feature(abi_custom)]

/// Division and modulo of two numbers using Arm's nonstandard ABI.
///
/// ```c
/// typedef struct { int quot; int rem; } idiv_return;
///  __value_in_regs idiv_return __aeabi_idivmod(int num, int denom);
/// ```
// SAFETY: The assembly implements the expected ABI, and "custom"
// ensures this function cannot be called directly.
#[unsafe(naked)]
pub unsafe extern "custom" fn __aeabi_idivmod() {
    core::arch::naked_asm!(
        "push {{r0, r1, r4, lr}}", // Back up clobbers.
        "bl {trampoline}",         // Call an `extern "C"` function for a / b.
        "pop {{r1, r2}}",
        "muls r2, r2, r0",         // Perform the modulo.
        "subs r1, r1, r2",
        "pop {{r4, pc}}",          // Restore clobbers, implicit return by setting `pc`.
        trampoline = sym crate::arm::__aeabi_idiv,
    );
}

A consequence of using a custom calling convention is that such functions cannot be called using a Rust call expression; the compiler simply does not know how to generate correct code for such a call. Instead the compiler will error when the program does try to call an extern "custom" function, and the only way to execute the function is using inline assembly.
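
A call site might therefore look roughly like this (a hypothetical sketch, not from compiler-builtins), with the caller placing arguments in the registers the custom ABI expects:

fn divmod(num: i32, den: i32) -> (i32, i32) {
    let (quot, rem): (i32, i32);
    // SAFETY: `__aeabi_idivmod` (defined above) takes its operands in r0 and r1
    // and returns the quotient and remainder in those same registers.
    unsafe {
        core::arch::asm!(
            "bl {idivmod}",
            idivmod = sym __aeabi_idivmod,
            inout("r0") num => quot,
            inout("r1") den => rem,
            // The helper may clobber anything a C call could clobber.
            clobber_abi("C"),
        );
    }
    (quot, rem)
}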

cfg on lines of inline assembly

The cfg_asm feature adds the ability to annotate individual lines of an assembly block with #[cfg(...)] or #[cfg_attr(..., ...)]. Configuring specific sections of assembly is useful to make assembly depend on, for instance, the target, target features, or feature flags. For example:

#![feature(cfg_asm)]

global_asm!(
    // ...

    // If enabled, initialise the SP. This is normally
    // initialised by the CPU itself or by a bootloader, but
    // some debuggers fail to set it when resetting the
    // target, leading to stack corruptions.
    #[cfg(feature = "set-sp")]
    "ldr r0, =_stack_start
     msr msp, r0",

     // ...
);

This example is from the cortex-m crate that currently has to use a custom macro that duplicates the whole assembly block for every use of #[cfg(...)]. With cfg_asm, that will no longer be necessary.


Rust Blog

Announcing Rust 1.88.0

The Rust team is happy to announce a new version of Rust, 1.88.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.88.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.88.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.88.0 stable

Let chains

This feature allows &&-chaining let statements inside if and while conditions, even intermingling with boolean expressions, so there is less distinction between if/if let and while/while let. The patterns inside the let sub-expressions can be irrefutable or refutable, and bindings are usable in later parts of the chain as well as the body.

For example, this snippet combines multiple conditions which would have required nesting if let and if blocks before:

if let Channel::Stable(v) = release_info()
    && let Semver { major, minor, .. } = v
    && major == 1
    && minor == 88
{
    println!("`let_chains` was stabilized in this version");
}

Let chains are only available in the Rust 2024 edition, as this feature depends on the if let temporary scope change for more consistent drop order.

Earlier efforts tried to work with all editions, but some difficult edge cases threatened the integrity of the implementation. 2024 made it feasible, so please upgrade your crate's edition if you'd like to use this feature!
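
Chains also work in while conditions. For example (a minimal sketch, not from the release notes), the binding from the let is usable both later in the chain and in the body:

fn pop_while_small(stack: &mut Vec<u32>) -> u32 {
    let mut sum = 0;
    // Keep popping while the stack is non-empty and its top element is small.
    while let Some(&top) = stack.last()
        && top < 10
    {
        stack.pop();
        sum += top;
    }
    sum
}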

Naked functions

Rust now supports writing naked functions with no compiler-generated epilogue and prologue, allowing full control over the generated assembly for a particular function. This is a more ergonomic alternative to defining functions in a global_asm! block. A naked function is marked with the #[unsafe(naked)] attribute, and its body consists of a single naked_asm! call.

For example:

#[unsafe(naked)]
pub unsafe extern "sysv64" fn wrapping_add(a: u64, b: u64) -> u64 {
    // Equivalent to `a.wrapping_add(b)`.
    core::arch::naked_asm!(
        "lea rax, [rdi + rsi]",
        "ret"
    );
}

The handwritten assembly block defines the entire function body: unlike non-naked functions, the compiler does not add any special handling for arguments or return values. Naked functions are used in low-level settings like Rust's compiler-builtins, operating systems, and embedded applications.

Look for a more detailed post on this soon!

Boolean configuration

The cfg predicate language now supports boolean literals, true and false, acting as a configuration that is always enabled or disabled, respectively. This works in Rust conditional compilation with cfg and cfg_attr attributes and the built-in cfg! macro, and also in Cargo [target] tables in both configuration and manifests.

Previously, empty predicate lists could be used for unconditional configuration, like cfg(all()) for enabled and cfg(any()) for disabled, but this meaning is rather implicit and easy to get backwards. cfg(true) and cfg(false) offer a more direct way to say what you mean.
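
For example (a minimal illustration):

// This item is unconditionally compiled out...
#[cfg(false)]
fn disabled() {}

fn main() {
    // ...and this condition is always true.
    if cfg!(true) {
        println!("always enabled");
    }
}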

See RFC 3695 for more background!

Cargo automatic cache cleaning

Starting in 1.88.0, Cargo will automatically run garbage collection on the cache in its home directory!

When building, Cargo downloads and caches crates needed as dependencies. Historically, these downloaded files would never be cleaned up, leading to an unbounded amount of disk usage in Cargo's home directory. In this version, Cargo introduces a garbage collection mechanism to automatically clean up old files (e.g. .crate files). Cargo will remove files downloaded from the network if not accessed in 3 months, and files obtained from the local system if not accessed in 1 month. Note that this automatic garbage collection will not take place if running offline (using --offline or --frozen).

Cargo 1.78 and newer track the access information needed for this garbage collection. This was introduced well before the actual cleanup that's starting now, in order to reduce cache churn for those that still use prior versions. If you regularly use versions of Cargo even older than 1.78, in addition to running current versions of Cargo, and you expect to have some crates accessed exclusively by the older versions of Cargo and don't want to re-download those crates every ~3 months, you may wish to set cache.auto-clean-frequency = "never" in the Cargo configuration, as described in the docs.
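
In the Cargo configuration file, that opt-out looks like this:

[cache]
auto-clean-frequency = "never"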

For more information, see the original unstable announcement of this feature. Some parts of that design remain unstable, like the gc subcommand tracked in cargo#13060, so there's still more to look forward to!

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Other changes

The i686-pc-windows-gnu target has been demoted to Tier 2, as mentioned in an earlier post. This won't have any immediate effect for users, since both the compiler and standard library tools will still be distributed by rustup for this target. However, with less testing than it had at Tier 1, it has more chance of accumulating bugs in the future.

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.88.0

Many people came together to create Rust 1.88.0. We couldn't have done it without all of you. Thanks!


Rust Blog

Rust compiler performance survey 2025

We're launching a Rust Compiler Performance Survey.

Long compile times of Rust code are frequently being cited as one of the biggest challenges limiting the productivity of Rust developers. Rust compiler contributors are of course aware of that, and they are continuously working to improve the situation, by finding new ways of speeding up the compiler, triaging performance regressions and measuring our long-term performance improvements. Recently, we also made progress on some large changes that have been in the making for a long time, which could significantly improve compiler performance by default.

When we talk about compilation performance, it is important to note that it is not always as simple as measuring how long it takes rustc to compile a crate. There are many diverse development workflows that might have competing trade-offs, and that can be bottlenecked by various factors, such as the integration of the compiler with the build system being used.

In order to better understand these workflows, we have prepared a Rust Compiler Performance Survey. This survey is focused specifically on compilation performance, which allows us to get more detailed data than what we usually get from the annual State of Rust survey. The data from this survey will help us find areas where we should focus our efforts on improving the productivity of Rust developers.

You can fill out the survey here.

Filling the survey should take you approximately 10 minutes, and the survey is fully anonymous. We will accept submissions until Monday, July 7th, 2025. After the survey ends, we will evaluate the results and post key insights on this blog.

We invite you to fill the survey, as your responses will help us improve Rust compilation performance. Thank you!


Rust Blog

Demoting i686-pc-windows-gnu to Tier 2

In Rust 1.88.0, the Tier 1 target i686-pc-windows-gnu will be demoted to Tier 2. Builds of both the standard library and the compiler will continue to be distributed for it as a Tier 2 target.

Background

Rust has supported Windows for a long time, with two different flavors of Windows targets: MSVC-based and GNU-based. MSVC-based targets (for example the most popular Windows target x86_64-pc-windows-msvc) use Microsoft’s native linker and libraries, while GNU-based targets (like i686-pc-windows-gnu) are built entirely from free software components like gcc, ld, and mingw-w64.

The major reasons to use a GNU-based toolchain instead of the native MSVC-based one are cross-compilation and licensing: link.exe only runs on Windows (barring Wine hacks) and requires a license for commercial usage.

x86_64-pc-windows-gnu and i686-pc-windows-gnu are currently both Tier 1 with host tools. The Target Tier Policy contains more details on what this entails, but the most important part is that tests for these targets are being run on every merged PR. This is the highest level of support we have, and is only used for the most high value targets (the most popular Linux, Windows, and Apple targets).

The *-windows-gnu targets currently do not have any dedicated target maintainers. We do not have a lot of expertise for this toolchain, and issues often aren't fixed and cause problems in CI that we have a hard time debugging.

The 32-bit version of this target is especially problematic and has significantly less usage than x86_64-pc-windows-gnu, which is why i686-pc-windows-gnu is being demoted to Tier 2.

What changes?

After Rust 1.88.0, i686-pc-windows-gnu will now be Tier 2 with host tools. For users, nothing will change immediately. Builds of both the standard library and the compiler will still be distributed by the Rust Project for use via rustup or alternative installation methods.

This does mean that this target will likely accumulate bugs faster in the future because of the reduced testing.

Future

If no maintainers are found and the *-windows-gnu targets continue causing problems, they may be demoted further. No concrete plans about this have been made yet.

If you rely on the *-windows-gnu targets and have expertise in this area, we would be very happy to have you as a target maintainer. You can check the Target Tier Policy for what exactly that would entail.

For more details on the motivation of the demotion, see RFC 3771 which proposed this change.


Rust Blog

Announcing Rust 1.87.0 and ten years of Rust!

Live from the 10 Years of Rust celebration in Utrecht, Netherlands, the Rust team is happy to announce a new version of Rust, 1.87.0!

picture of Rustaceans at the release party

Today's release day happens to fall exactly on the 10 year anniversary of Rust 1.0!

Thank you to the myriad contributors who have worked on Rust, past and present. Here's to many more decades of Rust! 🎉


As usual, the new version includes all the changes that have been part of the beta version in the past six weeks, following the consistent regular release cycle that we have followed since Rust 1.0.

If you have a previous version of Rust installed via rustup, you can get 1.87.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.87.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.87.0 stable

Anonymous pipes

1.87 adds access to anonymous pipes to the standard library. This includes integration with std::process::Command's input/output methods. For example, joining the stdout and stderr streams into one is now relatively straightforward, as shown below, while it used to require either extra threads or platform-specific functions.

use std::process::Command;
use std::io::Read;

let (mut recv, send) = std::io::pipe()?;

let mut command = Command::new("path/to/bin")
    // Both stdout and stderr will write to the same pipe, combining the two.
    .stdout(send.try_clone()?)
    .stderr(send)
    .spawn()?;

let mut output = Vec::new();
recv.read_to_end(&mut output)?;

// It's important that we read from the pipe before the process exits, to avoid
// filling the OS buffers if the program emits too much output.
assert!(command.wait()?.success());

Safe architecture intrinsics

Most std::arch intrinsics that are unsafe only due to requiring target features to be enabled are now callable in safe code that has those features enabled. For example, the following toy program which implements summing an array using manual intrinsics can now use safe code for the core loop.

#![forbid(unsafe_op_in_unsafe_fn)]

use std::arch::x86_64::*;

fn sum(slice: &[u32]) -> u32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // SAFETY: We have detected the feature is enabled at runtime,
            // so it's safe to call this function.
            return unsafe { sum_avx2(slice) };
        }
    }

    slice.iter().sum()
}

#[target_feature(enable = "avx2")]
#[cfg(target_arch = "x86_64")]
fn sum_avx2(slice: &[u32]) -> u32 {
    // SAFETY: __m256i and u32 have the same validity.
    let (prefix, middle, tail) = unsafe { slice.align_to::<__m256i>() };
    
    let mut sum = prefix.iter().sum::<u32>();
    sum += tail.iter().sum::<u32>();
    
    // Core loop is now fully safe code in 1.87, because the intrinsics require
    // matching target features (avx2) to the function definition.
    let mut base = _mm256_setzero_si256();
    for e in middle.iter() {
        base = _mm256_add_epi32(base, *e);
    }
    
    // SAFETY: __m256i and u32 have the same validity.
    let base: [u32; 8] = unsafe { std::mem::transmute(base) };
    sum += base.iter().sum::<u32>();
    
    sum
}

asm! jumps to Rust code

Inline assembly (asm!) can now jump to labeled blocks within Rust code. This enables more flexible low-level programming, such as implementing optimized control flow in OS kernels or interacting with hardware more efficiently.

  • The asm! macro now supports a label operand, which acts as a jump target.
  • The label must be a block expression with a return type of () or !.
  • The block executes when jumped to, and execution continues after the asm! block.
  • Using output and label operands in the same asm! invocation remains unstable.

unsafe {
    asm!(
        "jmp {}",
        label {
            println!("Jumped from asm!");
        }
    );
}

For more details, please consult the reference.

Precise capturing (+ use<...>) in impl Trait in trait definitions

This release stabilizes specifying the specific captured generic types and lifetimes in trait definitions using impl Trait return types. This allows using this feature in trait definitions, expanding on the stabilization for non-trait functions in 1.82.

Some example desugarings:

trait Foo {
    fn method<'a>(&'a self) -> impl Sized;
    
    // ... desugars to something like:
    type Implicit1<'a>: Sized;
    fn method_desugared<'a>(&'a self) -> Self::Implicit1<'a>;
    
    // ... whereas with precise capturing ...
    fn precise<'a>(&'a self) -> impl Sized + use<Self>;
    
    // ... desugars to something like:
    type Implicit2: Sized;
    fn precise_desugared<'a>(&'a self) -> Self::Implicit2;
}

Stabilized APIs

These previously stable APIs are now stable in const contexts:

i586-pc-windows-msvc target removal

The Tier 2 target i586-pc-windows-msvc has been removed. The difference between i586-pc-windows-msvc and the much more popular Tier 1 target i686-pc-windows-msvc is that i586-pc-windows-msvc does not require SSE2 instruction support. But Windows 10, the minimum required OS version of all Windows targets (except the win7 targets), itself requires SSE2 instructions.

All users currently targeting i586-pc-windows-msvc should migrate to i686-pc-windows-msvc.

You can check the Major Change Proposal for more information.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.87.0

Many people came together to create Rust 1.87.0. We couldn't have done it without all of you. Thanks!

Rust Blog

Announcing Google Summer of Code 2025 selected projects

The Rust Project is participating in Google Summer of Code (GSoC) again this year. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open-source.

In March, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories, even before GSoC officially started!

After the initial discussions, GSoC applicants prepared and submitted their project proposals. We received 64 proposals this year, almost exactly the same number as last year. We are happy to see that there was again so much interest in our projects.

A team of mentors primarily composed of Rust Project contributors then thoroughly examined the submitted proposals. GSoC required us to produce a ranked list of the best proposals, which was a challenging task in itself since Rust is a big project with many priorities! As we did last year, we went through several rounds of discussions and considered many factors, such as prior conversations with the given applicant, the quality of their proposal, and the importance of the proposed project for the Rust Project and its wider community, as well as the availability of mentors, who are often volunteers and thus have limited time available for mentoring.

As is usual in GSoC, even though some project topics received multiple proposals [1], we had to pick only one proposal per project topic. We also had to choose between great proposals targeting different work to avoid overloading a single mentor with multiple projects.

In the end, we narrowed the list down to a smaller number of the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.

Selected projects

On the 8th of May, Google announced the accepted projects. We are happy to share that 19 Rust Project proposals were accepted by Google for Google Summer of Code 2025. That's a lot of projects, which makes us super excited about GSoC 2025!

Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

Congratulations to all applicants whose project was selected! The mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.

We would also like to thank all the applicants whose proposal was sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still relevant and can serve as a general entry point for contributors who would like to work on projects that would help the Rust Project maintainers and the Rust ecosystem. Some of the Rust Project Goals are also looking for help.

There is also a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!

The accepted GSoC projects will run for several months. After GSoC 2025 finishes (in autumn of 2025), we will publish a blog post in which we will summarize the outcome of the accepted projects.

  [1] The most popular project topic received seven different proposals!


Rust Blog

Announcing rustup 1.28.2

The rustup team is happy to announce the release of rustup version 1.28.2. Rustup is the recommended tool to install Rust, a programming language that empowers everyone to build reliable and efficient software.

What's new in rustup 1.28.2

The headlines of this release are:

  • The cURL download backend and the native-tls TLS backend are now officially deprecated and a warning will start to show up when they are used. pr#4277
    • While rustup predates reqwest and rustls, the rustup team has long wanted to standardize on an HTTP + TLS stack with more components in Rust, which should increase security, potentially improve performance, and simplify maintenance of the project. With the default download backend already switched to reqwest since 2019, the team thinks it is time to focus maintenance on the default stack powered by these two libraries.
    • For people who have set RUSTUP_USE_CURL=1 or RUSTUP_USE_RUSTLS=0 in their environment to work around issues with rustup, please try to unset these after upgrading to 1.28.2 and file an issue if you still encounter problems.
  • The version of rustup can be pinned when installing via rustup-init.sh, and rustup self update can be used to upgrade/downgrade rustup 1.28.2+ to a given version. To do so, set the RUSTUP_VERSION environment variable to the desired version (for example 1.28.2), as shown after this list. pr#4259
  • rustup set auto-install disable can now be used to disable automatic installation of the toolchain. This is similar to the RUSTUP_AUTO_INSTALL environment variable introduced in 1.28.1 but with a lower priority. pr#4254
  • Fixed a bug in Nushell integration that might generate invalid commands in the shell configuration. Reinstalling rustup might be required for the fix to work. pr#4265
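
For example, pinning the rustup version during a self update might look like this in a POSIX shell:

$ RUSTUP_VERSION=1.28.2 rustup self update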

How to update

If you have a previous version of rustup installed, getting the new one is as easy as stopping any programs which may be using rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

Rustup's documentation is also available in the rustup book.

Caveats

Rustup releases can come with problems not caused by rustup itself but just due to having a new release.

In particular, anti-malware scanners might block rustup or stop it from creating or copying files, especially when installing rust-docs which contains many small files.

Issues like this should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release.

Thanks

Thanks again to all the contributors who made this rustup release possible!


Rust Blog

crates.io security incident: improperly stored session cookies

Today the crates.io team discovered that the contents of the cargo_session cookie were being persisted to our error monitoring service, Sentry, as part of event payloads sent when an error occurs in the crates.io backend. The value of this cookie is a signed value that identifies the currently logged in user, and therefore these cookie values could be used to impersonate any logged in user.

Sentry access is limited to a trusted subset of the crates.io team, Rust infrastructure team, and the crates.io on-call rotation team, who already have access to the production environment of crates.io. There is no evidence that these values were ever accessed or used.

Nevertheless, out of an abundance of caution, we have taken these actions today:

  1. We have merged and deployed a change to redact all cookie values from all Sentry events.
  2. We have invalidated all logged in sessions, thus making the cookies stored in Sentry useless. In effect, this means that every crates.io user has been logged out of their browser session(s).

Note that API tokens are not affected by this: they are transmitted using the Authorization HTTP header, and were already properly redacted before events were stored in Sentry. All existing API tokens will continue to work.

We apologise for the inconvenience. If you have any further questions, please contact us on Zulip or GitHub.
