Rust Blog: Posts

Announcing Rust 1.77.1

The Rust team has published a new point release of Rust, 1.77.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.77.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.77.1

Cargo enabled stripping of debuginfo in release builds by default in Rust 1.77.0. However, due to a pre-existing issue, debuginfo stripping does not behave in the expected way on Windows with the MSVC toolchain.

Rust 1.77.1 therefore disables the new Cargo behavior on Windows for targets that use MSVC. There are no changes for other targets. We plan to eventually re-enable debuginfo stripping in release mode in a later Rust release.

Contributors to 1.77.1

Many people came together to create Rust 1.77.1. We couldn't have done it without all of you. Thanks!

Announcing Rust 1.77.0

The Rust team is happy to announce a new version of Rust, 1.77.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.77.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.77.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.77.0 stable

This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.

C-string literals

Rust now supports C-string literals (c"abc") which expand to a nul-byte terminated string in memory of type &'static CStr. This makes it easier to write code interoperating with foreign language interfaces which require nul-terminated strings, with all of the relevant error checking (e.g., lack of interior nul byte) performed at compile time.
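
For illustration, here is a minimal sketch of what the new literal form gives you; the assertions only demonstrate the resulting type and the implicit nul terminator:

use std::ffi::CStr;

fn main() {
    // `c"…"` produces a `&'static CStr`; the nul terminator is added at compile time.
    const MSG: &CStr = c"hello";
    assert_eq!(MSG.to_bytes(), b"hello");            // contents without the nul byte
    assert_eq!(MSG.to_bytes_with_nul(), b"hello\0"); // contents including the nul byte
}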

Support for recursion in async fn

Async functions previously could not call themselves due to a compiler limitation. In 1.77, that limitation has been lifted, so recursive calls are permitted so long as they use some form of indirection to avoid an infinite size for the state of the function.

This means that code like this now works:

async fn fib(n: u32) -> u32 {
    match n {
        0 | 1 => 1,
        _ => Box::pin(fib(n-1)).await + Box::pin(fib(n-2)).await
    }
}

offset_of!

1.77.0 stabilizes offset_of! for struct fields, which provides access to the byte offset of the relevant public field of a struct. This macro is most useful when the offset of a field is required without an existing instance of a type. Implementing such a macro is already possible on stable, but without an instance of the type the implementation would require tricky unsafe code which makes it easy to accidentally introduce undefined behavior.

Users can now access the offset of a public field with offset_of!(StructName, field). This expands to a usize expression with the offset in bytes from the start of the struct.
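
As a minimal sketch (Packet is just an illustrative type, not taken from the release notes), usage looks like this:

use std::mem::offset_of;

#[repr(C)]
struct Packet {
    kind: u8,
    payload_len: u16,
}

fn main() {
    // Byte offset of `payload_len` from the start of `Packet`, without needing an instance.
    let off: usize = offset_of!(Packet, payload_len);
    println!("payload_len starts at byte {off}");
}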

Enable strip in release profiles by default

Cargo profiles which do not enable debuginfo in outputs (e.g., debug = 0) will enable strip = "debuginfo" by default.

This is primarily needed because the (precompiled) standard library ships with debuginfo, which means that statically linked results would include the debuginfo from the standard library even if the local compilations didn't explicitly request debuginfo.

Users who do want debuginfo can explicitly enable it with the debug flag in the relevant Cargo profile.

Clippy adds a new incompatible_msrv lint

The Rust project only supports the latest stable release of Rust. Some libraries aim to have an older minimum supported Rust version (MSRV), typically verifying this support by compiling in CI with an older release. However, when developing new code, it's convenient to use the latest documentation and the latest toolchain, with its bug fixes, performance improvements, and other refinements. This can make it easy to accidentally start using an API that's only available on newer versions of Rust.

Clippy has added a new lint, incompatible_msrv, which will inform users if functionality being referenced is only available on newer versions of Rust than their declared MSRV.
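
As a hypothetical example (assuming a crate whose Cargo.toml declares rust-version = "1.69"), the following code would be flagged, because IsTerminal was only stabilized in Rust 1.70:

// Hypothetical: with `rust-version = "1.69"` declared in Cargo.toml, Clippy's
// `incompatible_msrv` lint flags this call, since `IsTerminal::is_terminal`
// is only available starting with Rust 1.70.
use std::io::{stdout, IsTerminal};

fn main() {
    if stdout().is_terminal() {
        println!("running in a terminal");
    }
}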

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.77.0

Many people came together to create Rust 1.77.0. We couldn't have done it without all of you. Thanks!

Announcing Rustup 1.27.0

The rustup team is happy to announce the release of rustup version 1.27.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.27.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.27.0

This long-awaited Rustup release has gathered all the new features and fixes since April 2023. These changes include improvements in Rustup's maintainability, user experience, compatibility and documentation quality.

Also, it's worth mentioning that Dirkjan Ochtman (djc) and rami3l (rami3l) have joined the team and are coordinating this new release.

At the same time, we have granted Daniel Silverstone (kinnison) and 二手掉包工程师 (hi-rustin) their well-deserved alumni status in this release cycle. Kudos for your contributions over the years and your continuous guidance on maintaining the project!

The headlines for this release are:

  1. Basic support for the fish shell has been added. If you're using fish, PATH configs for your Rustup installation will be added automatically from now on.
    Please note that this will only take effect on installation, so if you have already installed Rustup on your machine, you will need to reinstall it. For example, if you have installed Rustup via rustup.rs, simply follow rustup.rs's instructions again; if you have installed Rustup using some other method, you might want to reinstall it using that same method.
  2. Rustup support for loongarch64-unknown-linux-gnu as a host platform has been added. This means you should be able to install Rustup via rustup.rs and no longer have to rely on loongnix.cn or self-compiled installations.
    Please note that as of March 2024, loongarch64-unknown-linux-gnu is a "tier 2 platform with host tools", so Rustup is guaranteed to build for this platform. According to Rust's target tier policy, this does not imply that these builds are also guaranteed to work, but they often work to quite a good degree and patches are always welcome!

Full details are available in the changelog!

Rustup's documentation is also available in the rustup book.

Thanks

Thanks again to all the contributors who made rustup 1.27.0 possible!

  • Anthony Perkins (acperkins)
  • Tianqi (airstone42)
  • Alex Gaynor (alex)
  • Alex Hudspith (alexhudspith)
  • Alan Somers (asomers)
  • Brett (brettearle)
  • Burak Emir (burakemir)
  • Chris Denton (ChrisDenton)
  • cui fliter (cuishuang)
  • Dirkjan Ochtman (djc)
  • Dezhi Wu (dzvon)
  • Eric Swanson (ericswanson-dfinity)
  • Prikshit Gautam (gautamprikshit1)
  • hev (heiher)
  • 二手掉包工程师 (hi-rustin)
  • Kamila Borowska (KamilaBorowska)
  • klensy (klensy)
  • Jakub Beránek (Kobzol)
  • Kornel (kornelski)
  • Matt Harding (majaha)
  • Mathias Brossard (mbrossard)
  • Christian Thackston (nan60)
  • Ruohui Wang (noirgif)
  • Olivier Lemasle (olivierlemasle)
  • Chih Wang (ongchi)
  • Pavel Roskin (proski)
  • rami3l (rami3l)
  • Robert Collins (rbtcollins)
  • Sandesh Pyakurel (Sandesh-Pyakurel)
  • Waffle Maybe (WaffleLapkin)
  • Jubilee (workingjubilee)
  • WÁNG Xuěruì (xen0n)
  • Yerkebulan Tulibergenov (yerke)
  • Renovate Bot (renovate)

crates.io: Download changes

Like the rest of the Rust community, crates.io has been growing rapidly, with download and package counts increasing 2-3x year-on-year. This growth doesn't come without problems, and we have made some changes to download handling on crates.io to ensure we can keep providing crates for a long time to come.

The Problem

This growth has brought with it some challenges. The most significant of these is that all download requests currently go through the crates.io API, occasionally causing scaling issues. If the API is down or slow, it affects all download requests too. In fact, the number one cause of waking up our crates.io on-call team is "slow downloads" due to the API having performance issues.

Additionally, this setup is also problematic for users outside of North America, where download requests are slow due to the distance to the crates.io API servers.

The Solution

To address these issues, over the last year we have decided to make some changes:

Starting from 2024-03-12, cargo will begin to download crates directly from our static.crates.io CDN servers.

This change will be facilitated by modifying the config.json file on the package index. In other words: no changes to cargo or your own system are needed for the changes to take effect. The config.json file is used by cargo to determine the download URLs for crates, and we will update it to point directly to the CDN servers, instead of the crates.io API.

Over the past few months, we have made several changes to the crates.io backend to enable this:

  • We announced the deprecation of "non-canonical" downloads, which would be harder to support when downloading directly from the CDN.
  • We changed how downloads are counted. Previously, downloads were counted directly on the crates.io API servers. Now, we analyze the log files from the CDN servers to count the download requests.

crates.io download graph of an arbitrary crate showing that on 2024-02-16, download numbers increased

The latter change has caused the download numbers of most crates to increase, as some download requests were not counted before. Specifically, crates.io mirrors were often downloading directly from the CDN servers already, and those downloads had previously not been counted. For crates with a lot of downloads these changes will be barely noticeable, but for smaller crates, the download numbers have increased quite a bit over the past few weeks since we enabled this change.

Expected Outcomes

We expect these changes to significantly improve the reliability and speed of downloads, as the performance of the crates.io API servers will no longer affect the download requests. Over the next few weeks, we will monitor the performance of the system to ensure that the changes have the expected effects.

We have noticed that some non-cargo build systems are not using the config.json file of the index to build the download URLs. We will reach out to the maintainers of those build systems to ensure that they are aware of the change and to help them update their systems to use the new download URLs. The old download URLs will continue to work, but these systems will be missing out on the potential performance improvement.

We are excited about these changes and believe they will greatly improve the reliability of crates.io. We look forward to hearing your feedback!

Clippy: Deprecating `feature = "cargo-clippy"`

Since Clippy v0.0.97 and before it was shipped with rustup, Clippy implicitly added a feature = "cargo-clippy" config1 when linting your code with cargo clippy.

Back in the day (2016) this was necessary to allow, warn or deny Clippy lints using attributes:

#[cfg_attr(feature = "cargo-clippy", allow(clippy_lint_name))]

Doing this hasn't been necessary for a long time. Today, Clippy users will set lint levels with tool lint attributes using the clippy:: prefix:

#[allow(clippy::lint_name)]

The implicit feature = "cargo-clippy" has only been kept for backwards compatibility, but will now be deprecated.

Alternative

As there is a rare use case for conditional compilation depending on Clippy, we will provide an alternative. So in the future you will be able to use:

#[cfg(clippy)]
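
For example, Clippy-only conditional compilation could then look something like this minimal, hypothetical sketch:

// Only one of these two definitions is compiled, depending on whether the
// code is currently being checked by `cargo clippy`.
#[cfg(clippy)]
fn build_mode() -> &'static str {
    "checked by Clippy"
}

#[cfg(not(clippy))]
fn build_mode() -> &'static str {
    "normal build"
}

fn main() {
    println!("{}", build_mode());
}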

Transitioning

Should you have instances of feature = "cargo-clippy" in your code base, you will see a warning from the new Clippy lint clippy::deprecated_clippy_cfg_attr. This lint can automatically fix your code. So if you see this lint triggering, just run:

cargo clippy --fix -- -Aclippy::all -Wclippy::deprecated_clippy_cfg_attr

This will fix all instances in your code.

In addition, check your .cargo/config file for:

[target.'cfg(feature = "cargo-clippy")']
rustflags = ["-Aclippy::..."]

If you have this config, you will have to update it yourself, by either changing it to cfg(clippy) or taking this opportunity to transition to setting lint levels in Cargo.toml directly.

Motivation for Deprecation

Currently, there's a call for testing in order to stabilize checking conditional compilation at compile time, aka cargo check -Zcheck-cfg. If we were to keep the feature = "cargo-clippy" config, users would start seeing a lot of warnings on their feature = "cargo-clippy" conditions. To work around this, they would either need to allow the lint or have to add a dummy feature to their Cargo.toml in order to silence those warnings:

[features]
cargo-clippy = []

We didn't think this would be user friendly, and decided that instead we want to deprecate the implicit feature = "cargo-clippy" config and replace it with the clippy config.

  1. It's likely that you didn't even know that Clippy implicitly sets this config (which was not a Cargo feature). This is intentional, as we stopped advertising and documenting this a long time ago.

Updated baseline standards for Windows targets

The minimum requirements for Tier 1 toolchains targeting Windows will increase with the 1.78 release (scheduled for May 02, 2024). Windows 10 will now be the minimum supported version for the *-pc-windows-* targets. These requirements apply both to the Rust toolchain itself and to binaries produced by Rust.

Two new targets have been added with Windows 7 as their baseline: x86_64-win7-windows-msvc and i686-win7-windows-msvc. They are starting as Tier 3 targets, meaning that the Rust codebase has support for them but we don't build or test them automatically. Once these targets reach Tier 2 status, they will be available to use via rustup.

Affected targets

  • x86_64-pc-windows-msvc
  • i686-pc-windows-msvc
  • x86_64-pc-windows-gnu
  • i686-pc-windows-gnu
  • x86_64-pc-windows-gnullvm
  • i686-pc-windows-gnullvm

Why are the requirements being changed?

Until now, Rust had Tier 1 support for Windows 7, 8, and 8.1, but these targets no longer meet our requirements. In particular, these targets can no longer be tested in CI, which is required by the Target Tier Policy, and they are no longer supported by their vendor.

Rust participates in Google Summer of Code 2024

We're writing this blog post to announce that the Rust Project will be participating in Google Summer of Code (GSoC) 2024. If you're not eligible or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, this should contain some useful information and links.

Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.

As of today, the organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to send project proposals to organizations that appeal to them. If their project proposal is accepted, they will embark on a 12-week journey during which they will try to complete their proposed project under the guidance of an assigned mentor.

We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals.

You can start discussing the project ideas with Rust Project maintainers immediately. The project proposal application period starts on March 18, 2024, and ends on April 2, 2024 at 18:00 UTC. Take note of that deadline, as there will be no extensions!

If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.

This is the first time that the Rust Project is participating in GSoC, so we are quite excited about it. We hope that participants in the program can improve their skills, but also would love for this to bring new contributors to the Project and increase the awareness of Rust in general. We will publish another blog post later this year with more information about our participation in the program.

2023 Annual Rust Survey Results

Hello, Rustaceans!

The Rust Survey Team is excited to share the results of our 2023 survey on the Rust Programming language, conducted between December 18, 2023 and January 15, 2024. As in previous years, the 2023 State of Rust Survey was focused on gathering insights and feedback from Rust users, and all those who are interested in the future of Rust more generally.

This eighth edition of the survey surfaced new insights and learning opportunities straight from the global Rust language community, which we will summarize below. In addition to this blog post, this year we have also prepared a report containing charts with aggregated results of all questions in the survey. Based on feedback from recent years, we have also tried to provide more comprehensive and interactive charts in this summary blog post. Let us know what you think!

Our sincerest thanks to every community member who took the time to express their opinions and experiences with Rust over the past year. Your participation will help us make Rust better for everyone.

There's a lot of data to go through, so strap in and enjoy!

Participation

Survey   Started   Completed   Completion rate   Views
2022     11 482    9 433       81.3%             25 581
2023     11 950    9 710       82.2%             16 028

As shown above, in 2023 we received 37% fewer survey views compared to 2022, but saw a slight uptick in starts and completions. There are many reasons why this could have been the case, but it’s possible that because we released the 2022 analysis blog so late last year, the survey was fresh in many Rustaceans’ minds. This might have prompted fewer people to feel the need to open the most recent survey. Therefore, we find it doubly impressive that there were more starts and completions in 2023, despite the lower overall view count.

Community

This year, we have relied on automated translations of the survey, and we have asked volunteers to review them. We thank the hardworking volunteers who reviewed these automated survey translations, ultimately allowing us to offer the survey in seven languages: English, Simplified Chinese, French, German, Japanese, Russian, and Spanish. We decided not to publish the survey in languages without a translation review volunteer, meaning we could not issue the survey in Portuguese, Ukrainian, Traditional Chinese, or Korean.

The Rust Survey team understands that there were some issues with several of these translated versions, and we apologize for any difficulty this has caused. We are always looking for ways to improve going forward and are in the process of discussing improvements to this part of the survey creation process for next year.

We saw a 3pp increase in respondents taking this year’s survey in English – 80% in 2023 and 77% in 2022. Across all other languages, we saw only minor variations – all of which are likely due to us offering fewer languages overall this year due to having fewer volunteers.

Rust user respondents were asked which country they live in. The top 10 countries represented were, in order: United States (22%), Germany (12%), China (6%), United Kingdom (6%), France (6%), Canada (3%), Russia (3%), Netherlands (3%), Japan (3%), and Poland (3%). We were interested to see a small reduction in participants taking the survey in the United States in 2023 (down 3pp from the 2022 edition), which is a positive indication of the growing global nature of our community! You can try to find your country in the chart below:

[PNG] [SVG]

Once again, the majority of our respondents reported being most comfortable communicating on technical topics in English at 92.7% — a slight difference from 93% in 2022. Again, Chinese was the second-highest choice for preferred language for technical communication at 6.1% (7% in 2022).

[PNG] [SVG]

We also asked whether respondents consider themselves members of a marginalized community. Out of those who answered, 76% selected no, 14% selected yes, and 10% preferred not to say.

We have asked the group that selected “yes” which specific groups they identified as being a member of. The majority of those who consider themselves a member of an underrepresented or marginalized group in technology identify as lesbian, gay, bisexual, or otherwise non-heterosexual. The second most selected option was neurodivergent at 41% followed by trans at 31.4%. Going forward, it will be important for us to track these figures over time to learn how our community changes and to identify the gaps we need to fill.

[PNG] [SVG]

As Rust continues to grow, we must acknowledge the diversity, equity, and inclusivity (DEI)-related gaps that exist in the Rust community. Sadly, Rust is not unique in this regard. For instance, only 20% of 2023 respondents to this representation question consider themselves a member of a racial or ethnic minority and only 26% identify as a woman. We would like to see more equitable figures in these and other categories. In 2023, the Rust Foundation formed a diversity, equity, and inclusion subcommittee on its Board of Directors whose members are aware of these results and are actively discussing ways that the Foundation might be able to better support underrepresented groups in Rust and help make our ecosystem more globally inclusive. One of the central goals of the Rust Foundation board's subcommittee is to analyze information about our community to find out what gaps exist, so this information is a helpful place to start. This topic deserves much more depth than is possible here, but readers can expect more on the subject in the future.

Rust usage

In 2023, we saw a slight jump in the number of respondents that self-identify as a Rust user, from 91% in 2022 to 93% in 2023.

[PNG] [SVG]

Of those who used Rust in 2023, 49% did so on a daily (or nearly daily) basis — a small increase of 2pp from the previous year.

[PNG] [SVG]

Of those who did not identify as Rust users, 67% reported that they simply haven’t had the chance to prioritize learning Rust yet, which was once again the most common reason, while 31% cited the perception of difficulty as the primary reason for not having used it.

[PNG] [SVG] [Wordcloud of open answers]

Of the former Rust users who participated in the 2023 survey, 46% cited factors outside their control (a decrease of 1pp from 2022), 31% stopped using Rust due to preferring another language (an increase of 9pp from 2022), and 24% cited difficulty as the primary reason for giving up (a decrease of 6pp from 2022).

[PNG] [SVG] [Wordcloud of open answers]

Rust expertise has generally increased amongst our respondents over the past year! 23% can write (only) simple programs in Rust (a decrease of 6pp from 2022), 28% can write production-ready code (an increase of 1pp), and 47% consider themselves productive using Rust — up from 42% in 2022. While the survey is just one tool to measure the changes in Rust expertise overall, these numbers are heartening as they represent knowledge growth for many Rustaceans returning to the survey year over year.

[PNG] [SVG]

In terms of operating systems used by Rustaceans, the situation is very similar to the results from 2022, with Linux being the most popular choice of Rust users, followed by macOS and Windows, which have a very similar share of usage.

[PNG] [SVG] [Wordcloud of open answers]

Rust programmers target a diverse set of platforms with their Rust programs, even though the most popular target by far is still a Linux machine. We can see a slight uptick in users targeting WebAssembly, embedded and mobile platforms, which speaks to the versatility of Rust.

[PNG] [SVG] [Wordcloud of open answers]

We cannot of course forget the favourite topic of many programmers: which IDE (developer environment) they use. Visual Studio Code still seems to be the most popular option, with RustRover (which was released last year) also gaining some traction.

[PNG] [SVG] [Wordcloud of open answers]

You can also take a look at the linked wordcloud that summarizes open answers to this question (the "Other" category), to see what other editors are also popular.

Rust at Work

We were excited to see a continued upward year-over-year trend of Rust usage at work. 34% of 2023 survey respondents use Rust in the majority of their coding at work — an increase of 5pp from 2022. Of this group, 39% work for organizations that make non-trivial use of Rust.

[PNG] [SVG]

Once again, the top reason employers of our survey respondents invested in Rust was the ability to build relatively correct and bug-free software at 86% — a 4pp increase from 2022 responses. The second most popular reason was Rust’s performance characteristics at 83%.

[PNG] [SVG]

We were also pleased to see an increase in the number of people who reported that Rust helped their company achieve its goals at 79% — an increase of 7pp from 2022. 77% of respondents reported that their organization is likely to use Rust again in the future — an increase of 3pp from the previous year. Interestingly, we saw a decrease in the number of people who reported that Rust has been challenging for their organization to use: 34% in 2023 and 39% in 2022. We also saw an increase in respondents reporting that Rust has been worth the cost of adoption: 64% in 2023 and 60% in 2022.

[PNG] [SVG]

There are many factors playing into this, but the growing awareness around Rust has likely resulted in the proliferation of resources, allowing new teams using Rust to be better supported.

In terms of technology domains, it seems that Rust is especially popular for creating server backends, web and networking services and cloud technologies.

[PNG] [SVG] [Wordcloud of open answers]

You can scroll the chart to the right to see more domains. Note that the Database implementation and Computer Games domains were not offered as closed answers in the 2022 survey (they were merely submitted as open answers), which explains the large jump.

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Challenges

As always, one of the main goals of the State of Rust survey is to shed light on challenges, concerns, and priorities on Rustaceans’ minds over the past year.

Of those respondents who shared their main worries for the future of Rust (9,374), the majority were concerned about Rust becoming too complex at 43% — a 5pp increase from 2022. 42% of respondents were concerned about a low level of Rust usage in the tech industry. 32% of respondents in 2023 were most concerned about Rust developers and maintainers not being properly supported — a 6pp increase from 2022.

We saw a notable decrease in respondents who were not at all concerned about the future of Rust, 18% in 2023 and 30% in 2022.

Thank you to all participants for your candid feedback which will go a long way toward improving Rust for everyone.

[PNG] [SVG] [Wordcloud of open answers]

Closed answers marked with N/A were not present in the previous (2022) version of the survey.

In terms of features that Rust users want to be implemented, stabilized or improved, the most desired improvements are in the areas of traits (trait aliases, associated type defaults, etc.), const execution (generic const expressions, const trait methods, etc.) and async (async closures, coroutines).

[PNG] [SVG] [Wordcloud of open answers]

It is interesting that 20% of respondents answered that they wish Rust would slow down the development of new features, which likely goes hand in hand with the previously mentioned worry that Rust is becoming too complex.

The areas of Rust that Rustaceans seem to struggle with the most are asynchronous Rust, the traits and generics system, and the borrow checker.

[PNG] [SVG] [Wordcloud of open answers]

Respondents of the survey want the Rust maintainers to mainly prioritize fixing compiler bugs (68%), improving the runtime performance of Rust programs (57%) and also improving compile times (45%).

[PNG] [SVG]

As in recent years, respondents noted that compilation time is one of the most important areas that should be improved. However, it is interesting to note that respondents also seem to consider runtime performance to be more important than compile times.

Looking ahead

Each year, the results of the State of Rust survey help reveal the areas that need improvement across the Rust Project and ecosystem, as well as the aspects that are working well for our community.

We are aware that the survey has contained some confusing questions, and we will try to improve upon that in the next year's survey. If you have any suggestions for the Rust Annual survey, please let us know!

We are immensely grateful to those who participated in the 2023 State of Rust Survey and facilitated its creation. While there are always challenges associated with developing and maintaining a programming language, this year we were pleased to see a high level of survey participation and candid feedback that will truly help us make Rust work better for everyone.

If you’d like to dig into more details, we recommend browsing through the full survey report.

Announcing Rust 1.76.0

The Rust team is happy to announce a new version of Rust, 1.76.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.76.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.76.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.76.0 stable

This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.

ABI compatibility updates

A new ABI Compatibility section in the function pointer documentation describes what it means for function signatures to be ABI-compatible. A large part of that is the compatibility of argument types and return types, with a list of those that are currently considered compatible in Rust. For the most part, this documentation is not adding any new guarantees, only describing the existing state of compatibility.

The one new addition is that it is now guaranteed that char and u32 are ABI compatible. They have always had the same size and alignment, but now they are considered equivalent even in function call ABI, consistent with the documentation above.
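
As a minimal sketch of what this guarantee covers, a fn(u32) can now be called through a fn(char) function pointer (obtained here by transmuting the pointer), since the two signatures are ABI-compatible:

// `char` and `u32` are guaranteed ABI-compatible, so calling this `fn(u32)`
// through a `fn(char)` function pointer is a defined operation.
fn print_code_point(c: u32) {
    println!("U+{c:04X}");
}

fn main() {
    let f: fn(char) = unsafe { std::mem::transmute::<fn(u32), fn(char)>(print_code_point) };
    f('🦀'); // prints U+1F980
}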

Type names from references

For debugging purposes, any::type_name::<T>() has been available since Rust 1.38 to return a string description of the type T, but that requires an explicit type parameter. It is not always easy to specify that type, especially for unnameable types like closures or for opaque return types. The new type_name_of_val(&T) offers a way to get a descriptive name from any reference to a type.

fn get_iter() -> impl Iterator<Item = i32> {
    [1, 2, 3].into_iter()
}

fn main() {
    let iter = get_iter();
    let iter_name = std::any::type_name_of_val(&iter);
    let sum: i32 = iter.sum();
    println!("The sum of the `{iter_name}` is {sum}.");
}

This currently prints:

The sum of the `core::array::iter::IntoIter<i32, 3>` is 6.

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.76.0

Many people came together to create Rust 1.76.0. We couldn't have done it without all of you. Thanks!

crates.io: API status code changes

Cargo and crates.io were developed in the rush leading up to the Rust 1.0 release to fill the needs for a tool to manage dependencies and a registry that people could use to share code. This rapid work resulted in these tools being connected with an API that initially didn't return the correct HTTP response status codes. After the Rust 1.0 release, Rust's stability guarantees around backward compatibility made this non-trivial to fix, as we wanted older versions of Cargo to continue working with the current crates.io API.

When an old version of Cargo receives a non-"200 OK" response, it displays the raw JSON body like this:

error: failed to get a 200 OK response, got 400
headers:
    HTTP/1.1 400 Bad Request
    Content-Type: application/json; charset=utf-8
    Content-Length: 171

body:
{"errors":[{"detail":"missing or empty metadata fields: description, license. Please see https://doc.rust-lang.org/cargo/reference/manifest.html for how to upload metadata"}]}

This was improved in pull request #6771, which was released in Cargo 1.34 (mid-2019). Since then, Cargo has supported receiving 4xx and 5xx status codes too and extracts the error message from the JSON response, if available.

On 2024-03-04 we will switch the API from returning "200 OK" status codes for errors to the new 4xx/5xx behavior. Cargo 1.33 and below will keep working after this change, but will show the raw JSON body instead of a nicely formatted error message. We feel confident that this degraded error message display will not affect very many users. According to the crates.io request logs only very few requests are made by Cargo 1.33 and older versions.

This is the list of API endpoints that will be affected by this change:

  • GET /api/v1/crates
  • PUT /api/v1/crates/new
  • PUT /api/v1/crates/:crate/:version/yank
  • DELETE /api/v1/crates/:crate/:version/unyank
  • GET /api/v1/crates/:crate/owners
  • PUT /api/v1/crates/:crate/owners
  • DELETE /api/v1/crates/:crate/owners

All other endpoints have already been using regular HTTP status codes for some time.

If you are still using Cargo 1.33 or older, we recommend upgrading to a newer version to get the improved error messages and all the other nice things that the Cargo team has built since then.

Announcing Rust 1.75.0

The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.75.0 stable

async fn and return-position impl Trait in traits

As announced last week, Rust 1.75 supports use of async fn and -> impl Trait in traits. However, this initial release comes with some limitations that are described in the announcement post.

It's expected that these limitations will be lifted in future releases.

Pointer byte offset APIs

Raw pointers (*const T and *mut T) used to primarily support operations operating in units of T. For example, <*const T>::add(1) would add size_of::<T>() bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to *const u8/*mut u8 first.
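
As a minimal sketch using byte_add, one of the byte-offset methods stabilized here, the byte-based and element-based forms line up as you would expect:

fn main() {
    let values = [1u32, 2, 3, 4];
    let base: *const u32 = values.as_ptr();

    // `add(1)` advances by one element (size_of::<u32>() = 4 bytes),
    // while `byte_add(4)` advances by exactly 4 bytes; here they agree.
    let by_elem = unsafe { base.add(1) };
    let by_byte = unsafe { base.byte_add(4) };

    assert_eq!(by_elem, by_byte);
    assert_eq!(unsafe { *by_byte }, 2);
}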

Code layout optimizations for rustc

The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall time improvement on our benchmarks. This tool optimizes the layout of the librustc_driver.so library containing most of the rustc code, allowing for better cache utilization.

We are also now building rustc with -Ccodegen-units=1, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% wall time mean win to our benchmarks.

In this release these optimizations are limited to x86_64-unknown-linux-gnu compilers, but we expect to expand that over time to include more platforms.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.75.0

Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!

Announcing `async fn` and return-position `impl Trait` in traits

The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of async fn in traits. Rust 1.75, which hits stable next week, will include support for both -> impl Trait notation and async fn in traits.

This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.

What's stabilizing

Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.

/// Given a list of players, return an iterator
/// over their names.
fn player_names(
    players: &[Player]
) -> impl Iterator<Item = &String> {
    players
        .iter()
        .map(|p| &p.name)
}

Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:

trait Container {
    fn items(&self) -> impl Iterator<Item = Widget>;
}

impl Container for MyContainer {
    fn items(&self) -> impl Iterator<Item = Widget> {
        self.items.iter().cloned()
    }
}

So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.

trait HttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
//  ^^^^^^^^ desugars to:
//  fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}

Where the gaps lie

-> impl Trait in public traits

The use of -> impl Trait is still discouraged for general use in public traits and APIs for the reason that users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the Container trait:

fn print_in_reverse(container: impl Container) {
    for item in container.items().rev() {
        // ERROR:                 ^^^
        // the trait `DoubleEndedIterator`
        // is not implemented for
        // `impl Iterator<Item = Widget>`
        eprintln!("{item}");
    }
}

Even though some implementations might return an iterator that implements DoubleEndedIterator, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, -> impl Trait is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.1
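
For instance, an associated-type version of the Container trait might look like the following sketch (Widget is a stand-in type); callers can then add whatever extra bounds they need:

// A sketch of the associated-type alternative, with a stand-in `Widget` type.
// Callers can now add extra bounds on `C::Iter`, such as `DoubleEndedIterator`.
type Widget = String;

trait Container {
    type Iter: Iterator<Item = Widget>;
    fn items(&self) -> Self::Iter;
}

fn print_in_reverse<C: Container>(container: C)
where
    C::Iter: DoubleEndedIterator,
{
    for item in container.items().rev() {
        eprintln!("{item}");
    }
}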

async fn in public traits

Since async fn desugars to -> impl Future, the same limitations apply. In fact, if you use bare async fn in a public trait today, you'll see a warning.

warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
 --> src/lib.rs:7:5
  |
7 |     async fn fetch(&self, url: Url) -> HtmlBody;
  |     ^^^^^
  |
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
  |
7 -     async fn fetch(&self, url: Url) -> HtmlBody;
7 +     fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
  |

Of particular interest to users of async are Send bounds on the returned future. Since users cannot add bounds later, the error message is saying that you as a trait author need to make a choice: Do you want your trait to work with multithreaded, work-stealing executors?

Thankfully, we have a solution that allows using async fn in public traits today! We recommend using the trait_variant::make proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with cargo add trait-variant, then use it like so:

#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
}

This creates two versions of your trait: LocalHttpService for single-threaded executors and HttpService for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional Send bounds:

pub trait HttpService: Send {
    fn fetch(
        &self,
        url: Url,
    ) -> impl Future<Output = HtmlBody> + Send;
}

This macro works for async because impl Future rarely requires additional bounds other than Send, so we can set our users up for success. See the FAQ below for an example of where this is needed.

Dynamic dispatch

Traits that use -> impl Trait and async fn are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the trait-variant crate.

How we hope to improve in the future

In the future we would like to allow users to add their own bounds to impl Trait return types, which would make them more generally useful. It would also enable more advanced uses of async fn. The syntax might look something like this:

trait HttpService = LocalHttpService<fetch(): Send> + Send;

Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.

Of course, the goals of the Async Working Group don't stop with async fn in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.

Frequently asked questions

Is it okay to use -> impl Trait in traits?

For private traits you can use -> impl Trait freely. For public traits, it's best to avoid them for now unless you can anticipate all the bounds your users might want (in which case you can use #[trait_variant::make], as we do for async). We expect to lift this restriction in the future.

Should I still use the #[async_trait] macro?

There are a couple of reasons you might need to continue using async-trait:

  • You want to support Rust versions older than 1.75.
  • You want dynamic dispatch.

As stated above, we hope to enable dynamic dispatch in a future version of the trait-variant crate.

Is it okay to use async fn in traits? What are the limitations?

Assuming you don't need to use #[async_trait] for one of the reasons stated above, it's totally fine to use regular async fn in traits. Just remember to use #[trait_variant::make] if you want to support multithreaded runtimes.

The biggest limitation is that a type must always decide if it implements the Send or non-Send version of a trait. It cannot implement the Send version conditionally on one of its generics. This can come up in the middleware pattern, for example a RequestLimitingService<T> that is an HttpService if T: HttpService.

Why do I need #[trait_variant::make] and Send bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:

fn spawn_task(service: impl HttpService + 'static) {
    tokio::spawn(async move {
        let url = Url::from("https://rust-lang.org");
        let _body = service.fetch(url).await;
    });
}

Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.

Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.

For a more thorough explanation of the problem, see this blog post.2

Can I mix async fn and impl Trait?

Yes, you can freely move between the async fn and -> impl Future spelling in your traits and impls. This is true even when one form has a Send bound.3 This makes the traits created by trait_variant nicer to use.

trait HttpService: Send {
    fn fetch(&self, url: Url)
    -> impl Future<Output = HtmlBody> + Send;
}

impl HttpService for MyService {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // This works, as long as `do_fetch(): Send`!
        self.client.do_fetch(url).await.into_body()
    }
}

Why don't these signatures use impl Future + '_?

For -> impl Trait in traits we adopted the 2024 Capture Rules early. This means that the + '_ you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.

Why am I getting a "refine" warning when I implement a trait with -> impl Trait?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:

pub trait Foo {
    fn foo(self) -> impl Debug;
}

impl Foo for u32 {
    fn foo(self) -> String {
//                  ^^^^^^
//  warning: impl trait in impl method signature does not match trait method signature
        self.to_string()
    }
}

The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?

fn main() {
    // Did the implementer mean to allow
    // use of `Display`, or only `Debug` as
    // the trait says?
    println!("{}", 32.foo());
}

Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)] on the impl.

Conclusion

The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.

  1. Note that associated types can only be used in cases where the type is nameable. This restriction will be lifted once impl_trait_in_assoc_type is stabilized.
  2. Note that in that blog post we originally said we would solve the Send bound problem before shipping async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead.
  3. This works because of auto-trait leakage, which allows knowledge of auto traits to "leak" from an item whose signature does not specify them.

Launching the 2023 State of Rust Survey

It’s time for the 2023 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

A Call for Proposals for the Rust 2024 Edition

The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!

What is an Edition?

You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.

But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:

  1. Editions are opt-in; a crate only receives breaking changes if its authors explicitly ask for them.
  2. Crates that use older editions never get left behind; a crate written for the original Rust 2015 Edition is still supported by every Rust release, and can still make use of all the new goodies that accompany each new version, e.g. new library APIs, compiler optimizations, etc.
  3. An Edition never splits the library ecosystem; crates using new Editions can depend on crates using old Editions (and vice-versa!), so nobody ever has to worry about Edition-related incompatibility.

In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!

A call for proposals for the Rust 2024 Edition

We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.

Please keep in mind that the following criteria determine the sort of changes we're looking for:

  1. A change must be possible to implement without violating the strict properties listed in the prior section. Specifically, the ability of crates to have cross-Edition dependencies imposes restrictions on changes that would take effect across crate boundaries, e.g. the signatures of public APIs. However, we will occasionally discover that an Edition-related change that was once thought to be impossible actually turns out to be feasible, so hope is not lost if you're not sure if your idea meets this standard; propose it just to be safe!
  2. We strive to ensure that nearly all Edition-related changes can be applied to existing codebases automatically (via tools like cargo fix), in order to make upgrading to a new Edition as painless as possible.
  3. Even if an Edition could make any given change, that doesn't mean that it should. We're not looking for hugely-invasive changes or things that would fundamentally alter the character of the language. Please focus your proposals on things like fixing obvious bugs, changing annoying behavior, unblocking future feature development, and making the language easier and more consistent.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.
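
As a minimal sketch of the end result, on the 2021 Edition (and later) this pattern now yields owned values:

// On edition 2021 and later, `into_iter()` on an array yields owned values
// rather than references.
fn main() {
    let array = [String::from("a"), String::from("b")];
    for s in array.into_iter() {
        let owned: String = s; // `s` is an owned `String` here
        println!("{owned}");
    }
}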

How to contribute

Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)

Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.

We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.

Continue Reading…

Rust Blog

Cargo cache cleaning

Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory. This post describes what the feature does, how to enable it, and what feedback we are looking for.

In short, we are asking people who use the nightly channel to enable this feature and report any issues they encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml or %USERPROFILE%\.cargo\config.toml for Windows):

[unstable]
gc = true

Alternatively, set the CARGO_UNSTABLE_GC=true environment variable or pass the -Zgc CLI flag to turn it on for individual commands.
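
For example, assuming a rustup-managed nightly toolchain, either of these turns it on for a single invocation:

CARGO_UNSTABLE_GC=true cargo +nightly fetch
cargo +nightly build -Zgc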

We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.

What is this feature?

Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.

This cache includes:

  • Registry index data, such as package dependency metadata from crates.io.
  • Compressed .crate files downloaded from a registry.
  • The uncompressed contents of those .crate files, which rustc uses to read the source and compile dependencies.
  • Clones of git repositories used by git dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.

What isn't yet included is cleaning of target directories; see Plan for the future.

Automatic cleaning

Once a day, when you run cargo, it will inspect the last-use cache tracker and determine whether any cache elements have not been used in a while. If they have not, they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build or cargo fetch.

The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.

Automatic deletion is disabled if cargo is offline, such as with --offline or --frozen, to avoid deleting artifacts that may need to be used if you are offline for a long period of time.

The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.
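
As a rough sketch of what this configuration can look like while the feature is unstable (the option names below are approximations of the nightly documentation and may change or disappear before stabilization, so treat them as assumptions and check the Automatic garbage collection section for the current spelling):

[gc.auto]
# NOTE: option names are approximate; see the "Automatic garbage collection"
# documentation for the authoritative list.
frequency = "1 day"          # how often the automatic clean may run
max-src-age = "1 month"      # extracted .crate sources (locally recreatable)
max-crate-age = "3 months"   # downloaded .crate files (must be re-downloaded)
max-index-age = "3 months"   # registry index data
max-git-co-age = "1 month"   # git dependency checkouts
max-git-db-age = "3 months"  # git dependency clones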

Manual cleaning

If you want to manually delete data from the cache, several options have been added under the cargo clean gc subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days) or specifying the maximum size of the cache (such as --max-download-size=1GiB). See the Manual garbage collection section or run cargo clean gc --help for more details on which options are supported.
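
For example, with the feature enabled as described earlier and a rustup-managed nightly toolchain:

cargo +nightly clean gc                            # run the normal automatic cleaning now
cargo +nightly clean gc --max-download-age=3days   # delete downloaded data not used in the last 3 days
cargo +nightly clean gc --max-download-size=1GiB   # shrink the downloaded data to at most 1 GiB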

This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.

What to watch out for

After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache.

After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.

The end result is that after that period of time you should start to notice the home directory using less space overall.

You can also try out the cargo clean gc command and explore some of its options if you want to try to manually delete some data.

If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.

Request for feedback

We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:

  • Have you run into any bugs, errors, issues, or confusing problems? Please file an issue over at https://github.com/rust-lang/cargo/issues/.
  • The first time that you use cargo with GC enabled, is there an unreasonably long delay? Cargo may need to scan your existing cache data once to detect what already exists from previous versions.
  • Do you notice unreasonable delays when it performs automatic cleaning once a day?
  • Do you have use cases where you need to do cleaning based on the size of the cache? If so, please share them at #13062.
  • If you think you would make use of manually deleting cache data, what are your use cases for doing that? Sharing them on #13060 about the CLI interface might help guide us on the overall design.
  • Does the default of deleting 3 month old data seem like a good balance for your use cases?

Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.

Design considerations and implementation details

(These sections are only for the intently curious among you.)

The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.

Performance

One big focus was to make sure that the performance of each invocation of cargo is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.

In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.

Locking

Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc, it previously did not hold a lock, under the assumption that existing cache data would not be modified.

However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.
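
As a purely conceptual sketch of that layering (this is not cargo's actual code; it assumes the fs2 crate for cross-platform file locking, and the lock-file paths are made up):

use std::fs::File;
use fs2::FileExt; // assumed dependency: fs2 = "0.4"

fn main() -> std::io::Result<()> {
    let cache = File::create("/tmp/package-cache.lock")?;  // hypothetical path
    let mutate = File::create("/tmp/cache-mutate.lock")?;  // hypothetical path

    // State 1: shared read lock. Any number of concurrent builds can hold
    // this while they read existing cache data.
    cache.lock_shared()?;

    // State 2: exclusive lock on the second file, taken while still holding
    // the shared lock. Only one process downloads registry data at a time,
    // but concurrent builds keep running.
    mutate.lock_exclusive()?;
    mutate.unlock()?;
    cache.unlock()?;

    // State 3: exclusive lock on the cache itself. It cannot be taken while
    // anyone holds either of the locks above, giving the GC exclusive
    // access while it deletes files.
    cache.lock_exclusive()?;
    cache.unlock()?;
    Ok(())
}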

Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.

Error handling and filesystems

Because we do not want problems with GC to disrupt users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.

Backwards compatibility

Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's on-disk data format, which has been stable since 2004. Cargo has support for doing schema migrations within the database that stay backwards compatible.

Plan for the future

A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.

Registry index metadata

One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.

Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.

Target directory change tracking and cleaning

Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate it has built using what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.

We are looking to replace this system with SQLite, which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use a substantial amount of disk space. Additionally, we are looking to implement other improvements, such as more accurate fingerprint tracking, providing information about why cargo thinks something needed to be recompiled, and hopefully improving performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.

Continue Reading…

Rust Blog

Announcing Rust 1.74.1

The Rust team has published a new point release of Rust, 1.74.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.74.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.74.1

1.74.1 resolves a few regressions introduced in 1.74.0:

Contributors to 1.74.1

Many people came together to create Rust 1.74.1. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Announcing Rust 1.74.0

The Rust team is happy to announce a new version of Rust, 1.74.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.74.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.74.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.74.0 stable

Lint configuration through Cargo

As proposed in RFC 3389, the Cargo.toml manifest now supports a [lints] table to configure the reporting level (forbid, deny, warn, allow) for lints from the compiler and other tools. So rather than setting RUSTFLAGS with -F/-D/-W/-A, which would affect the entire build, or using crate-level attributes like:

#![forbid(unsafe_code)]
#![deny(clippy::enum_glob_use)]

You can now write those in your package manifest for Cargo to handle:

[lints.rust]
unsafe_code = "forbid"

[lints.clippy]
enum_glob_use = "deny"

These can also be configured in a [workspace.lints] table, then inherited via [lints] workspace = true like many other workspace settings. Cargo will also track changes to these settings when deciding which crates need to be rebuilt.
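
For example, a workspace can define the levels once in its root manifest and have each member opt in (a minimal sketch reusing the lints from the example above):

# Workspace root Cargo.toml
[workspace.lints.rust]
unsafe_code = "forbid"

[workspace.lints.clippy]
enum_glob_use = "deny"

# Each member package's Cargo.toml
[lints]
workspace = true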

For more information, see the lints and workspace.lints sections of the Cargo reference manual.

Cargo Registry Authentication

Two more related Cargo features are included in this release: credential providers and authenticated private registries.

Credential providers allow configuration of how Cargo gets credentials for a registry. Built-in providers are included for OS-specific secure secret storage on Linux, macOS, and Windows. Additionally, custom providers can be written to support arbitrary methods of storing or generating tokens. Using a secure credential provider reduces risk of registry tokens leaking.

Registries can now optionally require authentication for all operations, not just publishing. This enables private Cargo registries to offer more secure hosting of crates. Use of private registries requires the configuration of a credential provider.
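
As a rough sketch of what this can look like in .cargo/config.toml (the table and provider names below are assumptions based on Cargo's registry-authentication documentation; check the docs linked below for the exact, current names):

# Example only: prefer the OS-specific secure stores, with the classic token
# file as a fallback. Provider names are assumptions; verify against the docs.
[registry]
global-credential-providers = ["cargo:token", "cargo:libsecret", "cargo:macos-keychain", "cargo:wincred"]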

For further information, see the Cargo docs.

Projections in opaque return types

If you have ever received the error that a "return type cannot contain a projection or Self that references lifetimes from a parent scope," you may now rest easy! The compiler now allows mentioning Self and associated types in opaque return types, like async fn and -> impl Trait. This is the kind of feature that gets Rust closer to how you might just expect it to work, even if you have no idea about jargon like "projection".

This functionality had an unstable feature gate because its implementation originally didn't properly deal with captured lifetimes, and once that was fixed it was given time to make sure it was sound. For more technical details, see the stabilization pull request, which describes the following examples that are all now allowed:

struct Wrapper<'a, T>(&'a T);

// Opaque return types that mention `Self`:
impl Wrapper<'_, ()> {
    async fn async_fn() -> Self { /* ... */ }
    fn impl_trait() -> impl Iterator<Item = Self> { /* ... */ }
}

trait Trait<'a> {
    type Assoc;
    fn new() -> Self::Assoc;
}
impl Trait<'_> for () {
    type Assoc = ();
    fn new() {}
}

// Opaque return types that mention an associated type:
impl<'a, T: Trait<'a>> Wrapper<'a, T> {
    async fn mk_assoc() -> T::Assoc { /* ... */ }
    fn a_few_assocs() -> impl Iterator<Item = T::Assoc> { /* ... */ }
}

Stabilized APIs

These APIs are now stable in const contexts:

Compatibility notes

  • As previously announced, Rust 1.74 has increased its requirements on Apple platforms. The minimum versions are now:
    • macOS: 10.12 Sierra (First released 2016)
    • iOS: 10 (First released 2016)
    • tvOS: 10 (First released 2016)

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.74.0

Many people came together to create Rust 1.74.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Faster compilation with the parallel front-end in nightly

The Rust compiler's front-end can now use parallel execution to significantly reduce compile times. To try it, run the nightly compiler with the -Z threads=8 option. This feature is currently experimental, and we aim to ship it in the stable compiler in 2024.

Keep reading to learn why a parallel front-end is needed and how it works, or just skip ahead to the How to use it section.

Compile times and parallelism

Rust compile times are a perennial concern. The Compiler Performance Working Group has continually improved compiler performance for several years. For example, in the first 10 months of 2023, there were mean reductions in compile time of 13%, in peak memory use of 15%, and in binary size of 7%, as measured by our performance suite.

However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining.

But there is one piece of large but high-hanging fruit: parallelism. Current Rust compiler users benefit from two kinds of parallelism, and the newly parallel front-end adds a third kind.

Existing interprocess parallelism

When you compile a Rust program, Cargo launches multiple rustc processes, compiling multiple crates in parallel. This works well. Try compiling a large Rust program with the -j1 flag to disable this parallelization and it will take a lot longer than normal.

You can visualise this parallelism if you build with Cargo's --timings flag, which produces a chart showing how the crates are compiled. The following image shows the timeline when building ripgrep on a machine with 28 virtual cores.

cargo build --timings output when compiling ripgrep

There are 60 horizontal lines, each one representing a distinct process. Their durations range from a fraction of a second to multiple seconds. Most of them are rustc, and the few orange ones are build scripts. The first twenty all start at the same time. This is possible because there are no dependencies between the relevant crates. But further down the graph, parallelism reduces as crate dependencies increase. Although the compiler can overlap compilation of dependent crates somewhat thanks to a feature called pipelined compilation, there is much less parallel execution happening towards the end of compilation, and this is typical for large Rust programs. Interprocess parallelism is not enough to take full advantage of many cores. For more speed, we need parallelism within each process.
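
To try both effects on your own project, the flags mentioned above can be used directly (the report path is Cargo's default output location):

cargo build -j1         # disable interprocess parallelism; expect a much slower build
cargo build --timings   # writes an HTML chart like the one above under target/cargo-timings/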

Existing intraprocess parallelism: the back-end

The compiler is split into two halves: the front-end and the back-end.

The front-end does many things, including parsing, type checking, and borrow checking. Until this week, it could not use parallel execution.

The back-end performs code generation. It generates code in chunks called "codegen units" and then LLVM processes these in parallel. This is a form of coarse-grained parallelism.

We can visualize the difference between the serial front-end and the parallel back-end. The following image shows the output of a profiler called Samply measuring rustc as it does a release build of the final crate in Cargo. The image is superimposed with markers that indicate front-end and back-end execution.

Samply output when compiling Cargo, serial

Each horizontal line represents a thread. The main thread is labelled "rustc" and is shown at the bottom. It is busy for most of the execution. The other 16 threads are LLVM threads, labelled "opt cgu.00" through to "opt cgu.15". There are 16 threads because 16 is the default number of codegen units for a release build.

There are several things worth noting.

  • Front-end execution takes 10.2 seconds.
  • Back-end execution takes 6.2 seconds, and the LLVM threads are running for 5.9 seconds of that.
  • The parallel code generation is highly effective. Imagine if all those LLVM threads executed one after another!
  • Even though there are 16 LLVM threads, at no point are all 16 executing at the same time, despite this being run on a machine with 28 cores. (The peak is 14 or 15.) This is because the main thread translates its internal code representation (MIR) to LLVM's code representation (LLVM IR) in serial. This takes a brief period for each codegen unit, and explains the staircase shape on the left-hand side of the code generation threads. There is some room for improvement here.
  • The front-end is entirely serial. There is a lot of room for improvement here.

New intraprocess parallelism: the front-end

The front-end is now capable of parallel execution. It uses Rayon to perform compilation tasks using fine-grained parallelism. Many data structures are synchronized by mutexes and read-write locks, atomic types are used where appropriate, and many front-end operations are made parallel. The addition of parallelism was done by modifying a relatively small number of key points in the code. The vast majority of the front-end code did not need to be changed.

When the parallel front-end is enabled and configured to use eight threads, we get the following Samply profile when compiling the same example as before.

Samply output when compiling Cargo, parallel

Again, there are several things worth noting.

  • Front-end execution takes 5.9 seconds (down from 10.2 seconds).
  • Back-end execution takes 5.3 seconds (down from 6.2 seconds), and the LLVM threads are running for 4.9 seconds of that (down from 5.9 seconds).
  • There are seven additional threads labelled "rustc" operating in the front-end. The reduced front-end time shows they are reasonably effective, but the thread utilization is patchy, with the eight threads all having periods of inactivity. There is room for significant improvement here.
  • Eight of the LLVM threads start at the same time. This is because the eight "rustc" threads create the LLVM IR for eight codegen units in parallel. (For seven of those threads that is the only work they do in the back-end.) After that, the staircase effect returns because only one "rustc" thread does LLVM IR generation while seven or more LLVM threads are active. If the number of threads used by the front-end was changed to 16 the staircase shape would disappear entirely, though in this case the final execution time would barely change.

Putting it all together

Rust compilation has long benefited from interprocess parallelism, via Cargo, and from intraprocess parallelism in the back-end. It can now also benefit from intraprocess parallelism in the front-end.

You might wonder how interprocess parallelism and intraprocess parallelism interact. If we have 20 parallel rustc invocations and each one can have up to 16 threads running, could we end up with hundreds of threads on a machine with only tens of cores, resulting in inefficient execution as the OS tries its best to schedule them?

Fortunately no. The compiler uses the jobserver protocol to limit the number of threads it creates. If a lot of interprocess parallelism is occurring, intraprocess parallelism will be limited appropriately, and the number of threads will not exceed the number of cores.

How to use it

The nightly compiler is now shipping with the parallel front-end enabled. However, by default it runs in single-threaded mode and won't reduce compile times.

Keen users can opt into multi-threaded mode with the -Z threads option. For example:

$ RUSTFLAGS="-Z threads=8" cargo build --release

Alternatively, to opt in from a config.toml file (for one or more projects), add these lines:

[build]
rustflags = ["-Z", "threads=8"]

It may be surprising that single-threaded mode is the default. Why parallelize the front-end and then run it in single-threaded mode? The answer is simple: caution. This is a big change! The parallel front-end has a lot of new code. Single-threaded mode exercises most of the new code, but excludes the possibility of threading bugs such as deadlocks that can affect multi-threaded mode. Even in Rust, parallel programs are harder to write correctly than serial programs. For this reason the parallel front-end also won't be shipped in beta or stable releases for some time.

Performance effects

When the parallel front-end is run in single-threaded mode, compilation times are typically 0% to 2% slower than with the serial front-end. This should be barely noticeable.

When the parallel front-end is run in multi-threaded mode with -Z threads=8, our measurements on real-world code show that compile times can be reduced by up to 50%, though the effects vary widely and depend on the characteristics of the code and its build configuration. For example, dev builds are likely to see bigger improvements than release builds because release builds usually spend more time doing optimizations in the back-end. A small number of cases compile more slowly in multi-threaded mode than single-threaded mode. These are mostly tiny programs that already compile quickly.

We recommend eight threads because this is the configuration we have tested the most and it is known to give good results. Values lower than eight will see smaller benefits. Values greater than eight will give diminishing returns and may even give worse performance.

If a 50% improvement seems low when going from one to eight threads, recall from the explanation above that the front-end only accounts for part of compile times, and the back-end is already parallel. You can't beat Amdahl's Law.

Memory usage can increase significantly in multi-threaded mode. We have seen increases of up to 35%. This is unsurprising given that various parts of compilation, each of which requires a certain amount of memory, are now executing in parallel.

Correctness

Reliability in single-threaded mode should be high.

In multi-threaded mode there are some known bugs, including deadlocks. If compilation hangs, you have probably hit one of them.

Feedback

If you have any problems with the parallel front-end, please check the issues marked with the "WG-compiler-parallel" label. If your problem does not match any of the existing issues, please file a new issue.

For more general feedback, please start a discussion on the wg-parallel-rustc Zulip channel. We are particularly interested to hear the performance effects on the code you care about.

Future work

We are working to improve the performance of the parallel front-end. As the graphs above showed, there is room to improve the utilization of the threads in the front-end. We are also ironing out the remaining bugs in multi-threaded mode.

We aim to stabilize the -Z threads option and ship the parallel front-end running by default in multi-threaded mode on stable releases in 2024.

Acknowledgments

The parallel front-end has been under development for a long time. It was started by @Zoxc, who also did most of the work for several years. After a period of inactivity, the project was revived this year by @SparrowLii, who led the effort to get it shipped. Other members of the Parallel Rustc Working Group have also been involved with reviews and other activities. Many thanks to everyone involved.

Continue Reading…

Rust Blog

crates.io: Dropping support for non-canonical downloads

TL;DR

  • We want to improve the reliability and performance of crate downloads.
  • "Non-canonical downloads" (that use URLs containing hyphens or underscores where the crate published uses the opposite) are blocking these plans.
  • On 2023-11-20 support for "non-canonical downloads" will be disabled.
  • cargo users are unaffected.

What are "non-canonical downloads"?

The "non-canonical downloads" feature allows everyone to download the serde_derive crate from https://crates.io/api/v1/crates/serde%5Fderive/1.0.189/download, but also from https://crates.io/api/v1/crates/SERDE-derive/1.0.189/download, where the underscore was replaced with a hyphen (crates.io normalizes underscores and hyphens to be the same for uniqueness purposes, so it isn't possible to publish a crate named serde-derive because serde_derive exists) and parts of the crate name are using uppercase characters. The same also works vice versa, if the canonical crate name uses hyphens and the download URL uses underscores instead. It even works with any other combination for crates that have multiple such characters (please don't mix them…!).

Why remove it?

Supporting such non-canonical download requests means that the crates.io server needs to perform a database lookup for every download request to figure out the canonical crate name. The canonical crate name is then used to construct a download URL and the client is HTTP-redirected to that URL.

While we introduced a caching layer some time ago to address some of the performance concerns, having all download requests go through our backend servers has started to become problematic, and at the current rate of growth it will not get any easier in the future.

Having to support "non-canonical downloads", however, prevents us from serving download requests directly from CDNs, so removing this support will unlock significant performance and reliability gains.

Who is using "non-canonical downloads"?

cargo always uses the canonical crate name from the package index to construct the corresponding download URLs. If support for this were removed on the crates.io side, cargo would still work exactly the same as before.

Looking at the crates.io request logs, the following user-agents are currently relying on "non-canonical downloads" support:

  • cargo-binstall/1.1.2
  • Faraday v0.17.6
  • Go-http-client/2.0
  • GNU Guile
  • python-requests/2.31.0

Three of these are just generic HTTP client libraries. GNU Guile is apparently a programming language, so most likely this is also a generic user-agent from a custom user program.

cargo-binstall is a tool enabling installation of binary artifacts of crates. The maintainer is already aware of the upcoming change and confirmed that more recent versions of cargo-binstall should not be affected by this change.

We recommend that any scripts relying on non-canonical downloads be adjusted to use the canonical names from the package index, the database dump, or the crates.io API instead. If you don't know which data source is best suited for you, we welcome you to take a look at the crates.io data access page.

What is the plan?

  1. Today: Announce the removal of support for non-canonical downloads on the main Rust blog.
  2. 2023-11-20: Disable support for non-canonical downloads and return a migration error message instead, to alert remaining users of this feature of the need to migrate. This still puts load on the application, since it has to detect that a request is using a non-canonical download URL.
  3. 2023-12-18: Return a regular 404 error instead of the migration error message, allowing us to get rid of (parts of) the database query.

Note that we will still need the database query for download counting purposes for now. We have plans to remove this requirement as well, but those efforts are blocked by us still supporting non-canonical downloads.

If you want to follow the progress on implementing these changes or if you have comments you can subscribe to the corresponding tracking issue. Related discussions are also happening on the crates.io Zulip stream.

Continue Reading…

Rust Blog

A tale of broken badges and 23,000 features

Around mid-October of 2023 the crates.io team was notified by one of our users that a shields.io badge for their crate stopped working. The issue reporter was kind enough to already debug the problem and figured out that the API request that shields.io sends to crates.io was most likely the problem. Here is a quote from the original issue:

This crate makes heavy use of feature flags which bloat the response payload of the API.

Apparently the API response for this specific crate had broken the 20 MB mark and shields.io wasn't particularly happy with this. Interestingly, this crate only had 9 versions published at this point in time. But how do you get to 20 MB with only 9 published versions?

As the quote above already mentions, this crate is using features… a lot of features… almost 23,000! 😱

What crate needs that many features? Well, this crate provides SVG icons for Rust-based web applications… and it uses one feature per icon so that the payload size of the final WebAssembly bundle stays small.

At first glance there should be nothing wrong with this. This seems like a reasonable thing to do from a crate author perspective and neither cargo, nor crates.io, were showing any warnings about this. Unfortunately, some of the internals are not too happy about such a high number of features…

The first problem that was already identified by the crate author: the API responses from crates.io are getting veeeery large. Adding to the problem is the fact that the crates.io API currently does not paginate the list of published versions. Changing this is obviously a breaking change, so our team had been a bit reluctant to change the behavior of the API in that regard, though this situation has shown that we will likely have to tackle this problem in the near future.

The next problem is that the index file for this crate is also getting large. With 9 published versions it already contains 11 MB of data. And just like the crates.io API, there is currently no pagination built into the package index file format.

Now you may ask, why do the package index and cargo need to know about features? Well, the easy answer is: for dependency resolution. Features can enable optional dependencies, so when a dependency feature is used it might influence the dependency resolution. Our initial thought was that we could at least drop all empty feature declarations from the index file (e.g. foo = []), but the cargo team informed us that cargo relies on them being available there too, and so for backwards-compatibility reasons this is not an option.
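
For example, a feature can be a plain marker or it can switch on an optional dependency, which is why the resolver has to know about all of them (a minimal sketch; the crate and feature names are made up):

[dependencies]
svg-renderer = { version = "1", optional = true }   # hypothetical optional dependency

[features]
icon-house = []                    # empty marker features, like the `foo = []` case above
icon-arrow = []
rendering = ["dep:svg-renderer"]   # enabling this feature also enables the optional dependency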

On the bright side, most Rust users are on cargo versions these days that use the sparse package index by default, which only downloads index files for packages actually being used. In other words: only users of this icon crate need to pay the price for downloading all the metadata. On the flipside, this means users who are still using the git-based index are all paying for this one crate using 23,000 features.

So, where do we go from here? 🤔

While we believe that supporting such a high number of features is conceptually a valid request, with the current implementation details in crates.io and cargo we cannot support this. After analyzing all of these downstream effects from a single crate having that many features, we realized we need some form of restriction on crates.io to keep the system from falling apart.

Now comes the important part: on 2023-10-16 the crates.io team deployed a change limiting the number of features a crate can have to 300 for any new crates/versions being published.

… for now, or at least until we have found solutions for the above problems.

We are aware of a couple of crates that also have legitimate reasons for having more than 300 features, and we have granted them appropriate exceptions to this rule, but we would like to ask everyone to be mindful of these limitations of our current systems.

We also invite everyone to participate in finding solutions to the above problems. The best place to discuss ideas is the crates.io Zulip stream, and once an idea is a bit more fleshed out it will then be transformed into an RFC.

Finally, we would like to thank Charles Edward Gagnon for making us aware of this problem. We also want to reiterate that the author and their crate are not to blame for this. It is hard to know of these crates.io implementation details when developing crates, so if anything, the blame would be on us, the crates.io team, for not having limits on this earlier. Anyway, we have them now, and now you all know why! 👋

Continue Reading…

Rust Blog

Announcing the New Rust Project Directors

We are happy to announce that we have completed the process to elect new Project Directors.

The new Project Directors are:

They will join Ryan Levick and Mark Rousskov to make up the five members of the Rust Foundation Board of Directors who represent the Rust Project.

The board is made up of Project Directors, who come from and represent the Rust Project, and Member Directors, who represent the corporate members of the Rust Foundation.

Both of these director groups have equal voting power.

We look forward to working with and being represented by this new group of project directors.

We were fortunate to have a number of excellent candidates and this was a difficult decision. We wish to express our gratitude to all of the candidates who were considered for this role! We also extend our thanks to the project as a whole who participated by nominating candidates and providing additional feedback once the nominees were published. Finally, we want to share our appreciation for the Project Director Elections Subcommittee for working to design and facilitate running this election process.

This was a challenging decision for a number of reasons.

This was also our first time doing this process, and we learned a lot that we will use to improve it going forward. The Project Director Elections Subcommittee will be following up with a retrospective outlining how well we achieved our goals with this process and making suggestions for future elections. We are expecting another election next year to start a rotating cadence of 2-year terms. Project governance is about iterating and refining over time.

Once again, we thank all who were involved in this process and we are excited to welcome our new Project Directors.

Continue Reading…

Rust Blog

Announcing Rust 1.73.0

The Rust team is happy to announce a new version of Rust, 1.73.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.73.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.73.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.73.0 stable

Cleaner panic messages

The output produced by the default panic handler has been changed to put the panic message on its own line instead of wrapping it in quotes. This can make panic messages easier to read, as shown in this example:

fn main() {
    let file = "ferris.txt";
    panic!("oh no! {file:?} not found!");
}

Output before Rust 1.73:

thread 'main' panicked at 'oh no! "ferris.txt" not found!', src/main.rs:3:5

Output starting in Rust 1.73:

thread 'main' panicked at src/main.rs:3:5:
oh no! "ferris.txt" not found!

This is especially useful when the message is long, contains nested quotes, or spans multiple lines.

Additionally, the panic messages produced by assert_eq and assert_ne have been modified, moving the custom message (the third argument) and removing some unnecessary punctuation, as shown below:

fn main() {
    assert_eq!("🦀", "🐟", "ferris is not a fish");
}

Output before Rust 1.73:

thread 'main' panicked at 'assertion failed: `(left == right)`
 left: `"🦀"`,
right: `"🐟"`: ferris is not a fish', src/main.rs:2:5

Output starting in Rust 1.73:

thread 'main' panicked at src/main.rs:2:5:
assertion `left == right` failed: ferris is not a fish
 left: "🦀"
right: "🐟"

Thread local initialization

As proposed in RFC 3184, LocalKey<Cell<T>> and LocalKey<RefCell<T>> can now be directly manipulated with get(), set(), take(), and replace() methods, rather than jumping through a with(|inner| ...) closure as needed for general LocalKey work. LocalKey<T> is the type of thread_local! statics.

The new methods make common code more concise and avoid running the extra initialization code for the default value specified in thread_local! for new threads.

use std::cell::Cell;

thread_local! {
    static THINGS: Cell<Vec<i32>> = Cell::new(Vec::new());
}

fn f() {
    // before:
    THINGS.with(|i| i.set(vec![1, 2, 3]));
    // now:
    THINGS.set(vec![1, 2, 3]);

    // ...

    // before:
    let v = THINGS.with(|i| i.take());
    // now:
    let v: Vec<i32> = THINGS.take();
}

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.73.0

Many people came together to create Rust 1.73.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Increasing the minimum supported Apple platform versions

As of Rust 1.74 (to be released on November 16th, 2023), the minimum version of Apple's platforms (iOS, macOS, and tvOS) that the Rust toolchain supports will be increased to newer minimums. These changes affect the Rust compiler itself (rustc), other host tooling, and most importantly, the standard library and any binaries produced that use it. With these changes in place, any binaries produced will stop loading on older versions or exhibit other, unspecified, behavior.

The new minimum versions are now:

  • macOS: 10.12 Sierra (First released 2016)
  • iOS: 10 (First released 2016)
  • tvOS: 10 (First released 2016)

If your application does not already target or support macOS 10.7-10.11 or iOS 7-9, these changes most likely do not affect you.

Affected targets

The following contains each affected target, and the comprehensive effects on it:

  • x86_64-apple-darwin (Minimum OS raised)
  • aarch64-apple-ios (Minimum OS raised)
  • aarch64-apple-ios-sim (Minimum iOS and macOS version raised.)
  • x86_64-apple-ios (Minimum iOS and macOS version raised. This is also a simulator target.)
  • aarch64-apple-tvos (Minimum OS raised)
  • armv7-apple-ios (Target removed. The oldest iOS 10-compatible device uses ARMv7s.)
  • armv7s-apple-ios (Minimum OS raised)
  • i386-apple-ios (Minimum OS raised)
  • i686-apple-darwin (Minimum OS raised)
  • x86_64-apple-tvos (Minimum tvOS and macOS version raised. This is also a simulator target.)

From these changes, only one target has been removed entirely: armv7-apple-ios. It was a tier 3 target.

Note that Mac Catalyst and M1/M2 (aarch64) Mac targets are not affected, as their minimum OS version already has a higher baseline. Refer to the Platform Support Guide for more information.

Affected systems

These changes remove support for multiple older mobile devices (iDevices) and many more Mac systems. Thanks to @madsmtm for compiling the list.

As of this update, the following device models are no longer supported by the latest Rust toolchain:

iOS

  • iPhone 4S (Released in 2011)
  • iPad 2 (Released in 2011)
  • iPad, 3rd generation (Released in 2012)
  • iPad Mini, 1st generation (Released in 2012)
  • iPod Touch, 5th generation (Released in 2012)

macOS

A total of 27 Mac system models, released between 2007 and 2009, are no longer supported.

The affected systems are not comprehensively listed here, but external resources exist which contain lists of the exact models. They can be found from Apple and Yama-Mac, for example.

tvOS

The third generation AppleTV (released 2012-2013) is no longer supported.

Why are the requirements being changed?

Prior to now, Rust claimed support for very old Apple OS versions, but many never even received passive testing or support. This is a rough place to be for a toolchain, as it hinders opportunities for improvement in exchange for a support level that few, if any, users will ever rely on. For Apple's mobile platforms, many of the old versions are now even unable to receive new software due to App Store publishing restrictions.

Additionally, the past two years have clearly indicated that Apple, which has tight control over toolchains for these targets, is making it difficult-to-impossible to support them anymore. As of Xcode 14, last year's toolchain release, building for many old OS versions became unsupported. Xcode 15 continues this trend. After enough time, continuing to use an older toolchain can even lead to breaking build issues for others.

We want Rust to be a first-class option for developing software for and on Apple's platforms, but to continue this goal we have to set an easier, more realistic compatibility baseline. The new requirements were determined after surveying what Apple and third-party statistics are available to us and picking a middle ground that balances compatibility with Rust's needs and limitations.

Do I need to do anything?

If you or an application you develop are affected by this change, there are different options which may be helpful:

  • If possible, raise your minimum supported OS versions. All OS versions discussed above no longer receive support from the vendor, not even security updates.
  • If you are running the Rust compiler or other host tools that were previously supported, consider cross-compiling from a newer host instead. You may also no longer be able to depend on the Rust standard library.
  • If none of these options work, you may need to freeze the version of the Rust toolchain your project builds with. Alternatively, you may be able to maintain a custom build of any sub-component of the toolchain that meets your requirements.

If your project does not directly support a specific version, but instead depends on a default previously used by Rust, there are some steps you can take to help improve the situation. For example, a number of crates in the ecosystem have hardcoded Rust's default support versions since they haven't changed for a long time:

  • If you use the cc crate to include build languages into your project, a future update will handle this transparently.
  • If you need a minimum OS version for anything else, crates should query the new rustc --print deployment-target option for a default, or user-set, value on toolchains using Rust 1.71 or newer going forward. Hardcoded defaults should only be used for older toolchains where this is unavailable.
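
For example, a build script or tool can ask the compiler for the effective value instead of hardcoding one (available on Rust 1.71 and newer):

# Prints the default, or user-overridden, Apple deployment target for the
# current toolchain; parse this rather than hardcoding a version.
rustc --print deployment-target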

Continue Reading…

Rust Blog

crates.io Policy Update RFC

Around the end of July the crates.io team opened an RFC to update the current crates.io usage policies. This policy update addresses operational concerns of the crates.io community service that have arisen since the last significant policy update in 2017, particularly related to name squatting and spam. The RFC has caused considerable discussion, and most of the suggested improvements have since been integrated into the proposal.

At the last team meeting the crates.io team decided to move the RFC forward and start the final comment period process.

We have been made aware by a couple of community members, though, that the RFC might not have been visible enough in the Rust community. We hope that this blog post changes that.

We invite you all to review the RFC and let us know if there are still any major concerns with these proposed policies.

Here is a quick TL;DR:

  • The current policies are quite vague on a couple of topics. The new policies are more explicit.
  • Reserving names is still allowed, but only to a certain degree and if you have a good reason for it.
  • The crates.io team will try to contact crate owners before taking any actions.

Finally, if you have any comments, please open threads on the RFC diff, instead of using the main comment box, to keep the discussion more structured. Thank you!

Continue Reading…

Rust Blog

Announcing Rust 1.72.1

The Rust team has published a new point release of Rust, 1.72.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.72.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.72.1

1.72.1 resolves a few regressions introduced in 1.72.0:

Contributors to 1.72.1

Many people came together to create Rust 1.72.1. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

Electing New Project Directors

Today we are launching the process to elect new Project Directors to the Rust Foundation Board of Directors. As we begin the process, we wanted to spend some time explaining the goals and procedures we will follow. We will summarize everything here, but if you would like to you can read the official process documentation.

We ask all project members to begin working with their Leadership Council representative to nominate potential Project Directors. See the Candidate Gathering section for more details. Nominations are due by September 15, 2023.

What are Project Directors?

The Rust Foundation Board of Directors has five seats reserved for Project Directors. These Project Directors serve as representatives of the Rust project itself on the Board. Like all Directors, the Project Directors are elected by the entity they represent, which in the case of the Rust Project means they are elected by the Rust Leadership Council. Project Directors serve for a term of two years and will have staggered terms. This year we will appoint two new directors and next year we will appoint three new directors.

The current project directors are Jane Losare-Lusby, Josh Stone, Mark Rousskov, Ryan Levick and Tyler Mandry. This year, Jane Losare-Lusby and Josh Stone will be rotating out of their roles as Project Directors, so the current elections are to fill their seats. We are grateful for the work Jane and Josh have put in during their terms as Project Directors!

We want to make sure the Project Directors can effectively represent the project as a whole, so we are soliciting input from the whole project. The elections process will go through two phases: Candidate Gathering and Election. Read on for more detail about how these work.

Candidate Gathering

The first phase is beginning right now. In this phase, we are inviting the members of all of the top level Rust teams and their subteams to nominate people who will make good project directors. The goal is to bubble these up to the Council through each of the top-level teams. You should be hearing from your Council Representative soon with more details, but if not, feel free to reach out to them directly.

Each team is encouraged to suggest candidates. Since we are electing two new directors, it would be ideal for teams to nominate at least two candidates. Nominees can be anyone in the project and do not have to be a member of the team who nominates them.

The candidate gathering process will be open until September 15, at which point each team's Council Representative will share their team's nominations and reasoning with the whole Leadership Council. At this point, the Council will confirm with each of the nominees that they are willing to accept the nomination and fill the role of Project Director. Then the Council will publish the set of candidates.

This then starts a ten day period where members of the Rust Project are invited to share feedback on the nominees with the Council. This feedback can include reasons why a nominee would make a good project director, or concerns the Council should be aware of.

The Council will announce the set of nominees by September 19 and the ten day feedback period will last until September 29. Once this time has passed, we will move on to the election phase.

Election

The Council will meet during the week of October 1 to complete the election process. In this meeting we will discuss each candidate, and once we have done this, the facilitator will propose a set of two of them to be the new Project Directors. The facilitator puts this to a vote, and if the Council unanimously agrees with the proposed pair of candidates then the process is completed. Otherwise, we will give another opportunity for council members to express their objections and we will continue with another proposal. This process repeats until we find two nominees who the Council can unanimously consent to. The Council will then confirm these nominees through an official vote.

Once this is done, we will announce the new Project Directors. In addition, we will contact each of the nominees, including those who were not elected, to tell them a little bit more about what we saw as their strengths and opportunities for growth to help them serve better in similar roles in the future.

Timeline

This process will continue through all of September and into October. Below are the key dates:

  • Candidate nominations due: September 15
  • Candidates published: September 19
  • Feedback period: September 19 - 29
  • Election meeting: Week of October 1

After the election meeting happens, the Rust Leadership Council will announce the results and the new Project Directors will assume their responsibilities.

Acknowledgements

A number of people have been involved in designing and launching this election process and we wish to extend a heartfelt thanks to all of them! We'd especially like to thank the members of the Project Director Election Proposal Committee: Jane Losare-Lusby, Eric Holk, and Ryan Levick. Additionally, many members of the Rust Community have provided feedback and thoughtful discussions that led to significant improvements to the process. We are grateful for all of your contributions.

Continue Reading…

Rust Blog

Change in Guidance on Committing Lockfiles

For years, the Cargo team has encouraged Rust developers to commit their Cargo.lock file for packages with binaries but not libraries. We now recommend people do what is best for their project. To help people make a decision, we do include some considerations and suggest committing Cargo.lock as a starting point in their decision making. To align with that starting point, cargo new will no longer ignore Cargo.lock for libraries as of nightly-2023-08-24. Regardless of what decision projects make, we encourage regular testing against their latest dependencies.

Background

The old guidelines ensured libraries tested their latest dependencies, which helped us keep quality high within Rust's package ecosystem by ensuring issues, especially backwards compatibility issues, were quickly found and addressed. While this extra testing was not exhaustive, we believe it helped foster a culture of quality in this nascent ecosystem.

This hasn't been without its downsides, though. Leaving Cargo.lock out of version control removed an important piece of history from code bases, making it harder for maintainers to bisect to find the root cause of a bug. For contributors, especially newer ones, it is another potential source of confusion and frustration, since CI can become unreliable whenever a dependency is yanked or a new release contains a bug.

Why the change

A lot has changed for Rust since the guideline was written. Rust has shifted from being a language for early adopters to being more mainstream, and we need to be mindful of the on-boarding experience of these new-to-Rust developers. Also, with this wider adoption, it isn't always practical to assume everyone is using the latest Rust release, and the community has been working through how to manage support for minimum-supported Rust versions (MSRV). Part of this is maintaining an instance of your dependency tree that can build with your MSRV. A lockfile is an appropriate way to pin versions for your project so you can validate your MSRV, but we found people were instead putting upper bounds on their version requirements due to the strength of our prior guideline, despite that likely being a worse solution.

The wider software development ecosystem has also changed a lot in the intervening time. CI has become easier to set up and maintain. We also have products like Dependabot and Renovate. This has opened up options besides having version control ignore Cargo.lock in order to test newer dependencies. Developers could have a scheduled job that first runs cargo update. They could also have bots regularly update their Cargo.lock in PRs, ensuring they pass CI before being merged.
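
As a minimal sketch of the scheduled-job idea, such a job could refresh the lockfile and then run the test suite using standard Cargo commands; the scheduling itself (a cron trigger, a bot, or similar) depends on your CI provider and is not shown here:

cargo update
cargo test --workspace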

Since there isn't a universal answer to these situations, we felt it was best to leave the choice to developers and give them the information they need to make a decision. For feedback on this policy change, see rust-lang/cargo#8728. You can also reach out to the Cargo team more generally on Zulip.

Continue Reading…

Rust Blog

Announcing Rust 1.72.0

The Rust team is happy to announce a new version of Rust, 1.72.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.72.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.72.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.72.0 stable

Rust reports potentially useful cfg-disabled items in errors

You can conditionally enable Rust code using cfg, such as to provide certain functions only with certain crate features, or only on particular platforms. Previously, items disabled in this way would be effectively invisible to the compiler. Now, though, the compiler will remember the name and cfg conditions of those items, so it can report (for example) if a function you tried to call is unavailable because you need to enable a crate feature.

   Compiling my-project v0.1.0 (/tmp/my-project)
error[E0432]: unresolved import `rustix::io_uring`
   --> src/main.rs:1:5
    |
1   | use rustix::io_uring;
    |     ^^^^^^^^^^^^^^^^ no `io_uring` in the root
    |
note: found an item that was configured out
   --> /home/username/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.8/src/lib.rs:213:9
    |
213 | pub mod io_uring;
    |         ^^^^^^^^
    = note: the item is gated behind the `io_uring` feature

For more information about this error, try `rustc --explain E0432`.
error: could not compile `my-project` (bin "my-project") due to previous error
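
A rough single-crate sketch of the same idea follows; the io_uring feature here is hypothetical and would need to be declared under [features] in Cargo.toml, and the exact diagnostic wording may differ from the cross-crate example above:

// Cargo.toml would need:
//   [features]
//   io_uring = []

#[cfg(feature = "io_uring")]
pub mod io_uring {
    pub fn submit() {}
}

fn main() {
    // When built without `--features io_uring`, this path fails to resolve,
    // and the compiler can now point at the cfg'd-out module above with a
    // "found an item that was configured out" note instead of a bare
    // "cannot find" error.
    io_uring::submit();
}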

Const evaluation time is now unlimited

To prevent user-provided const evaluation from getting into a compile-time infinite loop or otherwise taking unbounded time at compile time, Rust previously limited the maximum number of statements run as part of any given constant evaluation. However, especially creative Rust code could hit these limits and produce a compiler error. Worse, whether code hit the limit could vary wildly based on libraries invoked by the user; if a library you invoked split a statement into two within one of its functions, your code could then fail to compile.

Now, you can do an unlimited amount of const evaluation at compile time. To avoid having long compilations without feedback, the compiler will always emit a message after your compile-time code has been running for a while, and repeat that message after a period that doubles each time. By default, the compiler will also emit a deny-by-default lint (const_eval_long_running) after a large number of steps to catch infinite loops, but you can allow(const_eval_long_running) to permit especially long const evaluation.
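
As a contrived sketch, a const fn like the following performs a few million interpreter steps at compile time; work on this scale could previously risk hitting the fixed evaluation limit, but now it simply takes a moment to evaluate:

const fn sum_to(n: u64) -> u64 {
    // Executed step by step by the compile-time interpreter.
    let mut i = 0;
    let mut total = 0u64;
    while i < n {
        total = total.wrapping_add(i);
        i += 1;
    }
    total
}

// A couple of million iterations: nontrivial compile-time work, but finite.
const BIG_SUM: u64 = sum_to(2_000_000);

fn main() {
    println!("{BIG_SUM}");
}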

Uplifted lints from Clippy

Several lints from Clippy have been pulled into rustc:

  • clippy::undropped_manually_drops to undropped_manually_drops (deny)
    • ManuallyDrop does not drop its inner value, so calling std::mem::drop on it does nothing. Instead, the lint will suggest ManuallyDrop::into_inner first, or you may use the unsafe ManuallyDrop::drop to run the destructor in place. This lint is denied by default; a short sketch of it, together with invalid_nan_comparisons, follows this list.
  • clippy::invalid_utf8_in_unchecked to invalid_from_utf8_unchecked (deny) and invalid_from_utf8 (warn)
    • The first checks for calls to std::str::from_utf8_unchecked and std::str::from_utf8_unchecked_mut with an invalid UTF-8 literal, which violates their safety pre-conditions, resulting in undefined behavior. This lint is denied by default.
    • The second checks for calls to std::str::from_utf8 and std::str::from_utf8_mut with an invalid UTF-8 literal, which will always return an error. This lint is a warning by default.
  • clippy::cmp_nan to invalid_nan_comparisons (warn)
    • This checks for comparisons with f32::NAN or f64::NAN as one of the operands. NaN does not compare meaningfully to anything – not even itself – so those comparisons are always false. This lint is a warning by default, and will suggest calling the is_nan() method instead.
  • clippy::cast_ref_to_mut to invalid_reference_casting (allow)
    • This checks for casts of &T to &mut T without using interior mutability, which is immediate undefined behavior, even if the reference is unused. This lint is currently allowed by default due to potential false positives, but it is planned to be denied by default in 1.73 after implementation improvements.
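
A small sketch of the first and third of these lints, with the suggested fixes applied:

use std::mem::ManuallyDrop;

fn main() {
    let s = ManuallyDrop::new(String::from("hello"));
    // `drop(s);` would only drop the ManuallyDrop wrapper and trigger
    // `undropped_manually_drops`; unwrap the value so its destructor runs:
    drop(ManuallyDrop::into_inner(s));

    let x = 0.0_f64 / 0.0; // NaN
    // `x == f64::NAN` is always false and would trigger
    // `invalid_nan_comparisons`; use is_nan() instead:
    if x.is_nan() {
        println!("x is NaN");
    }
}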

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Future Windows compatibility

In a future release we're planning to increase the minimum supported Windows version to 10. The accepted proposal in compiler MCP 651 is that Rust 1.75 will be the last to officially support Windows 7, 8, and 8.1. When Rust 1.76 is released in February 2024, only Windows 10 and later will be supported as tier-1 targets. This change will apply both as a host compiler and as a compilation target.

Contributors to 1.72.0

Many people came together to create Rust 1.72.0. We couldn't have done it without all of you. Thanks!

Continue Reading…

Rust Blog

2022 Annual Rust Survey Results

Hello, Rustaceans!

For the 6th year in a row, the Rust Project conducted a survey on the Rust programming language, with participation from project maintainers, contributors, and those generally interested in the future of Rust. This edition of the annual State of Rust Survey opened for submissions on December 5 and ran until December 22, 2022.

First, we'd like to thank you for your patience on these long delayed results. We hope to identify a more expedient and sustainable process going forward so that the results come out more quickly and have even more actionable insights for the community.

The goal of this survey is always to give our wider community a chance to express their opinions about the language we all love and help shape its future. We’re grateful to those of you who took the time to share your voice on the state of Rust last year.

Before diving into a few highlights, we would like to thank everyone who was involved in creating the State of Rust survey with special acknowledgment to the translators whose work allowed us to offer the survey in English, Simplified Chinese, Traditional Chinese, French, German, Japanese, Korean, Portuguese, Russian, Spanish, and Ukrainian.

Participation

In 2022, we had 9,433 total survey completions and an increased survey completion rate of 82% vs. 76% in 2021. While the goal is always total survey completion for all participants, the survey requires time, energy, and focus – we consider this figure quite high and were pleased by the increase.

We also saw a significant increase in the number of people viewing but not participating in the survey (from 16,457 views in 2021 to 25,581 – a view increase of over 55%). While this is likely due to a number of different factors, we feel this information speaks to the rising interest in Rust and the growing general audience following its evolution.

In 2022, the survey had 11,482 responses, which is a slight decrease of 6.4% from 2021; however, the number of respondents that answered all survey questions has increased year over year. We were interested to see this slight decrease in responses, as this year’s survey was much shorter than in previous years – clearly, survey length is not the only factor driving participation.

Community

We were pleased to offer the survey in 11 languages – more than ever before, with the addition of a Ukrainian translation in 2022. 77% of respondents took this year’s survey in English, 5% in Chinese (simplified), 4% in German and French, 2% in Japanese, Spanish, and Russian, and 1% in Chinese (traditional), Korean, Portuguese, and Ukrainian. This is our lowest percentage of respondents taking the survey in English to date, which is an exciting indication of the growing global nature of our community!

The vast majority of our respondents reported being most comfortable communicating on technical topics in English (93%), followed by Chinese (7%).

Rust user respondents were asked which country they live in. The top 13 countries represented were as follows: United States (25%), Germany (12%), China (7%), United Kingdom (6%), France (5%), Canada (4%), Russia (4%), Japan (3%), Netherlands (3%), Sweden (2%), Australia (2%), Poland (2%), India (2%). Nearly 72.5% of respondents elected to answer this question.

While we see global access to Rust education as a critical goal for our community, we are proud to say that Rust was used all over the world in 2022!

Rust Usage

More people are using Rust than ever before! Over 90% of survey respondents identified as Rust users, and of those using Rust, 47% do so on a daily basis – an increase of 4% from the previous year.

30% of Rust user respondents can write simple programs in Rust, 27% can write production-ready code, and 42% consider themselves productive using Rust.

Of the former Rust users who completed the survey, 30% cited difficulty as the primary reason for giving up while nearly 47% cited factors outside of their control.

Graph: Why did you stop using Rust?

Similarly, 26% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not having used it (with 62% reporting that they simply haven’t had the chance to prioritize learning Rust yet).

Graph: Why don't you use Rust?

Rust Usage at Work

The growing maturation of Rust can be seen through the increased number of different organizations utilizing the language in 2022. In fact, 29.7% of respondents stated that they use Rust for the majority of their coding work at their workplace, which is a 51.8% increase compared to the previous year.

Graph: Are you using Rust at work?

There are numerous reasons why we are seeing increased use of Rust in professional environments. Top reasons cited for the use of Rust include the perceived ability to write "bug-free software" (86%), Rust's performance characteristics (84%), and Rust's security and safety guarantees (69%). We were also pleased to find that 76% of respondents continue to use Rust simply because they found it fun and enjoyable. (Respondents could select more than one option here, so the numbers don't add up to 100%.)

Graph: Why do you use Rust at work?

Of those respondents that used Rust at work, 72% reported that it helped their team achieve its goals (a 4% increase from the previous year) and 75% have plans to continue using it on their teams in the future.

But like any language being applied in the workplace, Rust’s learning curve is an important consideration; 39% of respondents using Rust in a professional capacity reported the process as “challenging” and 9% of respondents said that adopting Rust at work has “slowed down their team”. However, 60% of productive users felt Rust was worth the cost of adoption overall.

Graph: Reasons for using Rust at work

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Supporting the Future of Rust

A key goal of the State of Rust survey is to shed light on challenges, concerns, and priorities Rustaceans are currently sitting with.

Of those respondents who shared their main worries for the future of Rust, 26% have concerns that the developers and maintainers behind Rust are not properly supported – a decrease of more than 30% from the previous year’s findings. One area of focus in the future may be to see how the Project in conjunction with the Rust Foundation can continue to push that number towards 0%.

While 38% have concerns about Rust “becoming too complex”, only a small number of respondents were concerned about documentation, corporate oversight, or speed of evolution. 34% of respondents are not worried about the future of Rust at all.

This year’s survey reflects a 21% decrease in fears about Rust’s usage in the industry since the last survey. Faith in Rust’s staying power and general utility is clearly growing as more people find Rust and become lasting members of the community. As always, we are grateful for your honest feedback and dedication to improving this language for everyone.

Graph: Worries about the future of Rust

Another Round of Thanks

To quote an anonymous survey respondent, “Thanks for all your hard work making Rust awesome!” – Rust wouldn’t exist or continue to evolve for the better without the many Project members and the wider Rust community. Thank you to those who took the time to share their thoughts on the State of Rust in 2022!

Continue Reading…