This week in class we talked about code linting and code formatting using ESLint and Prettier. These tools automate many of the labour-intensive, easy-to-miss nitpicks reviewers often leave on pull requests, freeing up review time for more important elements such as design and code structure. The idea of a style guide is not new: many tech companies (Google, Airbnb, and Microsoft, to name a few) have their own code style guides, much in the same way that writing organizations such as newsrooms and scientific publishers maintain documents that keep publications consistent across hundreds of outlets and thousands of writers. What has gained traction in recent years is automating this style checking to help developers maintain a consistent style, thanks in part to style and formatting tools that come standard, and enabled by default, in many modern programming languages.
Formatting as a First-Class Tool
I was first exposed to this idea of a universal formatter as part of a language's toolchain when I started experimenting with Go. Go's gofmt enforces a consistent style not just within one project but across the entire ecosystem of Go code, making even unfamiliar codebases more approachable. This consistency helps reduce visual noise, making it easier to focus on what the code says rather than how it's formatted. The same idea appears in other languages, such as Rust with rustfmt, though plenty of style-enforcement tools predate these, such as RuboCop for Ruby and astyle for C and C++.
In Go and Rust, these formatters increase productivity dramatically: as long as you write syntactically correct code, it can be the ugliest code you have ever written, and by passing it through the formatter (many editors and IDEs will even format the source for you on save!) you can leave it to the tool to make the code pretty and readable. That means less time fiddling with alignment, worrying about indentation, and dithering over where to break a function call chain to best communicate your intent. It also means formatting wars like tabs versus spaces are dead; the formatter is the absolute arbiter of correct style, and because it is consistent across the entire ecosystem there is strong pressure to conform rather than tweak the formatter to personal preference.
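As a small illustration of that workflow (a toy example of my own, not project code), here is some valid-but-ugly Rust and roughly what rustfmt's default style turns it into:

// Before: syntactically correct, but unreadable.
fn sum(v: &[i32])->i32{let mut t=0;for x in v{t+=x;}t}

// After rustfmt: consistent spacing, indentation, and line breaks.
fn sum(v: &[i32]) -> i32 {
    let mut t = 0;
    for x in v {
        t += x;
    }
    t
}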
We still have an open issue on supernova for implementing rustfmt as part of our continuous integration process, which we hope to close out with a pull request relatively soon so we can be sure all the code contributed to supernova follows the same format as other projects.
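The plan, roughly, is to run the formatter in check mode so the build fails whenever a file is not formatted. With rustfmt installed as a rustup component, that looks something like the following (a sketch only; the exact flags have shifted between rustfmt releases):

$ rustup component add rustfmt-preview
$ cargo fmt --all -- --check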
Build Infrastructure Weirdness
Speaking of our continuous integration process, we ran into a very curious issue with 0xazure/supernova#24 this week, where our builds started failing on the beta release channel.
The build failure is caused by clippy's new_ret_no_self lint, which checks that, by convention, new methods construct a new instance of a type and return that instance.
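For illustration, the conventional shape the lint expects looks like this (a minimal example of mine, not supernova code):

struct Counter {
    count: u32,
}

impl Counter {
    // The convention: `new` constructs an instance and returns it as Self.
    fn new() -> Self {
        Counter { count: 0 }
    }
}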
In the issue, the build only fails on the beta release channel, which was surprising: I expected this lint to fail the build on stable as well if it failed on beta.
To further confuse the issue, the build on nightly, which per our configuration for supernova is allowed to fail, was successful.
Digging into the problem some more, it looks like we are running into rust-lang-nursery/rust-clippy#3313, where the new_ret_no_self lint incorrectly triggers on a new function that does return Self, just wrapped in a container type or tuple.
Indeed, we can see this in our implementation of Config::new
pub fn new(mut args: env::Args) -> Result<Config, &'static str> {
    args.next();

    let username = match args.next() {
        None => return Err("No username provided"),
        Some(arg) => arg,
    };

    let token = args.next();

    Ok(Config { username, token })
}
which triggers the following lint failure:
error: methods called `new` usually return `Self`
--> src/lib.rs:18:5
|
18 | / pub fn new(mut args: env::Args) -> Result<Config, &'static str> {
19 | | args.next();
20 | |
21 | | let username = match args.next() {
... |
28 | | Ok(Config { username, token })
29 | | }
| |_____^
|
= note: `-D clippy::new-ret-no-self` implied by `-D warnings`
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.212/index.html#new_ret_no_self
even though our return type is Result<Config, &'static str>, which unwraps to Config on success and a &'static str when there is an error creating the new instance.
Investigating the Root Cause
An important part of build infrastructure is reproducibility: the ability to run a build with the same inputs and get the same outputs.
Without reproducibility we have flaky tests that no one wants to run and, worse, no one trusts.
In the case of supernova we have a build matrix to test on all three release channels: stable, beta, and nightly, and we need to make sure testing on these channels happens in a predictable way.
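On Travis that matrix is declared in .travis.yml; a minimal sketch of the relevant part (our real configuration carries more than this) looks like:

language: rust
rust:
  - stable
  - beta
  - nightly
matrix:
  allow_failures:
    - rust: nightly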
It turns out the issue stems from how clippy is installed in each environment.
The recommended way to install clippy is as a rustup component, using rustup component add clippy-preview.
However, because clippy is published as a component for rustup rather than as a version-pinned project dependency, this command does not install the same version of clippy across all release channels.
This can be verified as follows:
$ cargo +stable clippy --version
clippy 0.0.212 (125907ad 2018-09-17)
$ cargo +beta clippy --version
clippy 0.0.212 (b1d03437 2018-10-19)
$ cargo +nightly clippy --version
clippy 0.0.212 (d8b42690 2018-11-04)
Note that while all of the version numbers are the same (v0.0.212), the commit hashes and dates are all different.
It is important to verify that the tools you use to test or lint your project are the same version in all of your environments; otherwise you'll end up with confusing build failures like the one we hit here.
In our case we are testing against beta and nightly to get an early look at upcoming changes to the Rust compiler and any new lints that may be added, so failures on anything but stable are nice-to-have information rather than complete show-stoppers.
In other cases, or in different matrices, it's even more important that the test environment be as consistent as possible and that the number of variables being changed stay as small as possible, so that tracing failures remains relatively simple.
Lint tools are great for catching low-hanging fruit in code review, but you can’t blindly trust them.
When there is a failure, it takes a person's knowledge of the project to determine whether the failure is legitimate: a problem with the submitted code, a problem with the tool configuration, or a false positive in the tool itself, as in this case with clippy's new_ret_no_self lint.
Fixing the Problem
After reaching out to some friends in the #rust IRC channel, we decided not to run clippy on the beta toolchain, to avoid more false positives like this in the future.
We are keeping clippy enabled for the nightly release channel because nightly is allowed to fail on Travis; we will investigate those failures, but they will not block landing pull requests if nightly, or clippy on nightly, finds fault with our code.
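Sketched as a Travis script step (a hypothetical shape; the actual change may differ), the fix guards the clippy install and run on the channel name:

# Skip clippy entirely on beta; stable must pass, nightly may fail.
if [ "$TRAVIS_RUST_VERSION" != "beta" ]; then
    rustup component add clippy-preview
    cargo clippy -- -D warnings
fi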
I also recently filed 0xazure/supernova#32 to provide better visibility into the versions of the tools we install, matching how Travis prints tooling versions for the tools that come pre-installed in the Rust build environment. This should help us track down version discrepancies and make troubleshooting failures much quicker.
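A sketch of what that could look like (hypothetical; the issue will settle the details) is an extra build step that prints the version of each tool we install ourselves, using the same command we used above:

before_script:
  - cargo clippy --version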
After landing the above fix (and an extra tweak so we only run Travis against the master branch), our builds went fully green for the first time since we enabled Travis on the project!
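For the record, that branch restriction is the standard Travis whitelist:

branches:
  only:
    - master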
Setting up automated builds can take a lot of up-front effort, but it pays big dividends as the project grows, helping to ensure the quality of the software being written.
Now we just need some tests so we can verify our code is actually correct…