Why Use Rust in FRC

Posted in robotics; tagged with frc, rust

Warning: I threw this post together during a flight to have it ready in time for FRC Houston Champs. Expect incompleteness and minor errors.

I have recently fallen in love with Rust to the point that I’ve written several libraries to let me use it in FRC. There are already plenty of blog posts about the benefits of Rust, so I’m going to try to motivate each example in an FRC context. I’ll try to keep examples understandable for people with little to no Rust experience, but reasonable programming experience. Though I am definitely part of the Rust Evangelism Strikeforce, I partake mostly ironically; use whatever language you want! If you love Haskell, create Haskell bindings to the FRC libraries! (I’ll help however I can!)

A book memeing people who shill rust, like myself

Data Types

Declarations

Data in Rust comes in two forms: structs, which replace classes from C++/Java/etc., and enums, which have no real counterpart in other systems languages.

struct Point {
    x: f64,
    y: f64,
}

This declares a struct with two members, x and y, both of type f64, a 64-bit floating-point number. All numeric types in Rust are explicit about their width; there's no ambiguity or hiding it. i32, a 32-bit signed integer, would replace int (at least in Java and on most C/C++ platforms).

Rust enums are unlike enums in most languages. For those familiar with the terms, they are a tagged union or sum type.

enum ElevatorState {
    GoToHeight(i32),
    Hold,
    MotivateThisExample {
        x: f64,
        y: f64,
    }
}

You can see that each variant, or “thing the enum can be”, can hold lots of different kinds of data. Hold on, various states, each with data? That sounds like a state machine!

Enums for State Machines

In 2018, 254’s elevator state machine looked like this:

package com.team254.frc2018.statemachines;

import com.team254.frc2018.planners.SuperstructureMotionPlanner;
import com.team254.frc2018.states.SuperstructureCommand;
import com.team254.frc2018.states.SuperstructureConstants;
import com.team254.frc2018.states.SuperstructureState;
import com.team254.frc2018.subsystems.Elevator;
import com.team254.lib.util.Util;

public class SuperstructureStateMachine {
    public enum WantedAction {
        IDLE,
        GO_TO_POSITION,
        WANT_MANUAL,
    }

    public enum SystemState {
        HOLDING_POSITION,
        MOVING_TO_POSITION,
        MANUAL
    }

    private SystemState mSystemState = SystemState.HOLDING_POSITION;

    private SuperstructureCommand mCommand = new SuperstructureCommand();
    private SuperstructureState mCommandedState = new SuperstructureState();
    private SuperstructureState mDesiredEndState = new SuperstructureState();

    private SuperstructureMotionPlanner mPlanner = new SuperstructureMotionPlanner();

    private double mScoringHeight = Elevator.kHomePositionInches;
    private double mScoringAngle = SuperstructureConstants.kStowedPositionAngle;

    private double mOpenLoopPower = 0.0;
    private boolean mManualWantsLowGear = false;
    private double mMaxHeight = SuperstructureConstants.kElevatorMaxHeight;

    // * Snipped methods *

    public synchronized SuperstructureCommand update(double timestamp, WantedAction wantedAction,
                                                     SuperstructureState currentState) {
        synchronized (SuperstructureStateMachine.this) {
            SystemState newState;

            // Handle state transitions
            switch (mSystemState) {
                case HOLDING_POSITION:
                    newState = handleHoldingPositionTransitions(wantedAction, currentState);
                    break;
                case MOVING_TO_POSITION:
                    newState = handleMovingToPositionTransitions(wantedAction, currentState);
                    break;
                case MANUAL:
                    newState = handleManualTransitions(wantedAction, currentState);
                    break;
                default:
                    System.out.println("Unexpected superstructure system state: " + mSystemState);
                    newState = mSystemState;
                    break;
            }

            if (newState != mSystemState) {
                System.out.println(timestamp + ": Superstructure changed state: " + mSystemState + " -> " + newState);
                mSystemState = newState;
            }

            // Pump elevator planner only if not jogging.
            if (!mCommand.openLoopElevator) {
                mCommandedState = mPlanner.update(currentState);
                mCommand.height = Math.min(mCommandedState.height, mMaxHeight);
                mCommand.wristAngle = mCommandedState.angle;
            }

            // Handle state outputs
            switch (mSystemState) {
                case HOLDING_POSITION:
                    getHoldingPositionCommandedState();
                    break;
                case MOVING_TO_POSITION:
                    getMovingToPositionCommandedState();
                    break;
                case MANUAL:
                    getManualCommandedState();
                    break;
                default:
                    System.out.println("Unexpected superstructure state output state: " + mSystemState);
                    break;
            }

            return mCommand;
        }
    }

    private SystemState handleDefaultTransitions(WantedAction wantedAction, SuperstructureState currentState) {
        if (wantedAction == WantedAction.GO_TO_POSITION) {
            if (scoringPositionChanged()) {
                updateMotionPlannerDesired(currentState);
            } else if (mPlanner.isFinished(currentState)) {
                return SystemState.HOLDING_POSITION;
            }
            return SystemState.MOVING_TO_POSITION;
        } else if (wantedAction == WantedAction.WANT_MANUAL) {
            return SystemState.MANUAL;
        } else {
            if (mSystemState == SystemState.MOVING_TO_POSITION && !mPlanner.isFinished(currentState)) {
                return SystemState.MOVING_TO_POSITION;
            } else {
                return SystemState.HOLDING_POSITION;
            }
        }
    }

    // HOLDING_POSITION
    private SystemState handleHoldingPositionTransitions(WantedAction wantedAction,
                                                         SuperstructureState currentState) {
        return handleDefaultTransitions(wantedAction, currentState);
    }

    private void getHoldingPositionCommandedState() {
        mCommand.elevatorLowGear = false;
        mCommand.openLoopElevator = false;
    }

    // MOVING_TO_POSITION
    private SystemState handleMovingToPositionTransitions(WantedAction wantedAction,
                                                          SuperstructureState currentState) {

        return handleDefaultTransitions(wantedAction, currentState);
    }

    private void getMovingToPositionCommandedState() {
        mCommand.elevatorLowGear = false;
        mCommand.openLoopElevator = false;
    }

    // MANUAL
    private SystemState handleManualTransitions(WantedAction wantedAction,
                                                SuperstructureState currentState) {
        if (wantedAction != WantedAction.WANT_MANUAL) {
            // Freeze height.
            mScoringAngle = currentState.angle;
            mScoringHeight = currentState.height;
            return handleDefaultTransitions(WantedAction.GO_TO_POSITION, currentState);
        }
        return handleDefaultTransitions(wantedAction, currentState);
    }

    private void getManualCommandedState() {
        mCommand.elevatorLowGear = mManualWantsLowGear;
        mCommand.wristAngle = SuperstructureConstants.kWristMinAngle;
        mCommand.openLoopElevator = true;
        mCommand.openLoopElevatorPercent = mOpenLoopPower;
    }
}

This is obviously some top-tier FRC code, but the limitations of Java make things harder to understand without running the code in your head. First, to avoid insane verbosity, the actual manipulation of data is pushed far away from the states and into other classes or methods. This kind of encapsulation is useful, but can quickly get messy as we write more and more getter/setter-type methods that just hide actual values from the programmer. In my opinion, it should be avoided as long as the state we’re dealing with is relatively small.

Meme about RRIR

A motivating example might be:

#[derive(Debug, Copy, Clone)]
enum WantedAction {
    Idle,
    GoToPosition(f64), // target height
    Manual(Gear, f64), // gear, open-loop power
}

#[derive(Debug, Copy, Clone)]
enum SystemState {
    HoldingPosition,
    MovingToPosition,
    Manual(Gear, f64), // gear, open-loop power
}

Immediately we can see one benefit: mManualWantsLowGear and mOpenLoopPower are now part of the state, not separate variables. There is no possibility we update the state to manual without also ensuring the operator’s gear shift or speed change is accounted for.

Rust’s match syntax also allows us to greatly reduce the length and visual complexity of the update method:

struct StateMachine {
    state: SystemState,
    // other fields
}
impl StateMachine {
    pub fn update(
        &mut self,
        want: WantedAction,
        current: SuperStructureState,
    ) -> &SuperStructureCommand {
        // Handle state transitions

        // Match statements can be expressions
        let new_state = match (self.state, want) {
            // `_` matches everything
            (SystemState::Manual(_, _), WantedAction::Manual(_, _)) => {
                self.default_transition(want, current)
            }
            (SystemState::Manual(_, _), _) => {
                // blocks can be expressions
                self.scoring_angle = current.angle;
                self.scoring_height = current.height;
                // the block evaluates to its final expression:
                self.default_transition(want, current)
            }
            _ => self.default_transition(want, current),
        };
        self.state = new_state;
        // Pump elevator planner only if not jogging.
        if !self.command.open_loop_elevator {
            self.commanded_state = self.planner.update(current);
            self.command.height = self.commanded_state.height.min(self.max_height);
            self.command.wrist_angle = self.commanded_state.angle;
        }

        // Handle state outputs
        match self.state {
            // match OR syntax
            SystemState::HoldingPosition | SystemState::MovingToPosition => {
                self.command.elevator_low_gear = false;
                self.command.open_loop_elevator = false;
            }
            SystemState::Manual(gear, cmd) => {
                self.command = SuperStructureCommand {
                    elevator_low_gear: gear.is_low(), // this datum is extracted from the match
                    wrist_angle: constants::WRIST_MIN_ANGLE,
                    open_loop_elevator: true,
                    open_loop_elevator_percent: cmd, // so is this one
                    ..self.command // struct update syntax fills in the rest of the fields,
                                   // effectively mutating only the 4 specified fields
                }
            }
        }
        &self.command
    }

    fn default_transition(&mut self, want: WantedAction, current: SuperStructureState) -> SystemState {
        match want {
            WantedAction::GoToPosition(_) if self.score_position_changed() => {
                self.update_motion_planner(current);
                SystemState::MovingToPosition
            }
            WantedAction::GoToPosition(_) if self.planner.finished(current) => {
                SystemState::HoldingPosition
            }
            WantedAction::GoToPosition(_) => SystemState::MovingToPosition,
            WantedAction::Manual(gear, cmd) => SystemState::Manual(gear, cmd),
            // all match statements must be exhaustive; `_` is a catch-all cop-out
            _ => match self.state {
                SystemState::MovingToPosition if !self.planner.finished(current) => {
                    SystemState::MovingToPosition
                }
                _ => SystemState::HoldingPosition,
            },
        }
    }
}

In my subjective opinion, the match statements reflect the programmer’s intention more clearly; we can see the cases we care about laid out before us. The match beginning with let new_state = match (self.state, want) could be read as, “Case 1: both the current and desired states are manual; Case 2: the current state is manual, but the desired one is something else; Case 3: anything else.” I also inlined a few methods, but that’s just my personal preference. With more work, you could further integrate the data into the enums and perhaps simplify things more.

Of course, if you’ve never read Rust code before, that probably looked incomprehensible. The regularity of the syntax makes it pretty easy to pick up, though!

Meme of Rust Syntax and Zero-Cost abstractions

No Nulls

Rust’s enums are used extensively to replace functionality from other languages. For example, there are no nulls in Rust; there are also no exceptions. That means NullPointerException is now impossible twice! For C/C++ people, so are segmentation faults. In my experience, this really enhances collaboration: nobody has to check for something being null anymore; you can just be sure it’s not. If you need nullable data, use the Option type from the standard library.

enum Option<T> {
    Some(T),
    None
}

When data is nullable in this way, the programmer is forced to think about what to do in the null case or explicitly opt in to a runtime panic (aborting the program). A similar enum, Result, is used to replace exceptions.
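As a sketch of what that handling looks like in practice (the elevator preset here is hypothetical):

```rust
fn target_height(preset: Option<f64>) -> f64 {
    // The compiler will not let us touch the f64 inside without
    // handling the None case.
    match preset {
        Some(height) => height,
        None => 0.0, // fall back to a hypothetical home position
    }
}

fn main() {
    assert_eq!(target_height(Some(30.0)), 30.0);
    assert_eq!(target_height(None), 0.0);

    // Or explicitly opt in to a runtime panic:
    let height = Some(30.0).expect("no preset height configured");
    assert_eq!(height, 30.0);
}
```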

Struct Instantiation Syntax

Initializing large data classes in Java is a huge pain. If you want default values, you have to write either a factory or a new constructor for each default parameter. In C/C++, you can instantiate a struct directly, but you have to opt in to specifying all fields. Most people opt instead to mutate each value directly, which is a little more verbose. I’m sure those using the new TalonSRX::ConfigAll method family have felt this copy-paste pain.

Meme of Rewrite time

let mut mt = TalonSRX::new(config::MASTER_TALON);
mt.config_all(
    &TalonSRXConfig {
        primaryPID: TalonSRXPIDSetConfiguration {
            selectedFeedbackSensor: FeedbackDevice::CTRE_MagEncoder_Relative,
            ..Default::default()
        },
        base: BaseMotorConfig {
            forwardSoftLimitThreshold: Self::MAX_HEIGHT_TICKS,
            forwardSoftLimitEnable: true,
            reverseSoftLimitThreshold: Self::MIN_HEIGHT_TICKS,
            reverseSoftLimitEnable: true,
            voltageCompSaturation: 12.0,
            // Stage one slot
            slot_0: SlotConfiguration {
                kP: 0.18,
                kI: 0.0,
                kD: 4.0,
                kF: 0.04,
                integralZone: 0,
                allowableClosedloopError: 0,
                maxIntegralAccumulator: 0.0,
                closedLoopPeakOutput: 1.0,
                closedLoopPeriod: 1,
            },
            // stage two slot (more rigid)
            slot_1: SlotConfiguration {
                kP: 0.23,
                kI: 0.0,
                kD: 2.0,
                kF: 0.04,
                integralZone: 0,
                allowableClosedloopError: 0,
                maxIntegralAccumulator: 0.0,
                closedLoopPeakOutput: 1.0,
                closedLoopPeriod: 1,
            },
            motionCruiseVelocity: 12500,
            motionAcceleration: 10000,
            motionProfileTrajectoryPeriod: 0,
            closedloopRamp: 0.1,
            openloopRamp: 0.1,
            ..Default::default()
        },
        continuousCurrentLimit: 20,
        peakCurrentLimit: 35,
        peakCurrentDuration: 200,
        ..Default::default()
    },
    3000,
)
.expect("CONFIG ALL FAILED");

The Rust compiler forces us to initialize every field, but we can fill in the remaining fields with default values just by writing ..Default::default().
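Here's a hedged sketch of the same idea with a small, hypothetical config struct (deriving Default gives every field its type's default, e.g. 0.0 for f64):

```rust
// Hypothetical config struct, for illustration only.
#[derive(Debug, Default, Clone)]
struct ArmConfig {
    kp: f64,
    ki: f64,
    kd: f64,
    max_output: f64,
}

fn main() {
    // Spell out only the fields we care about; `..Default::default()`
    // fills in the rest.
    let cfg = ArmConfig {
        kp: 0.8,
        max_output: 0.5,
        ..Default::default()
    };
    assert_eq!(cfg.ki, 0.0);
    assert_eq!(cfg.kd, 0.0);
}
```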

The syntax is even more useful for “data classes”:

final class SwerveKinematics {
    public final double forward;
    public final double strafe;

    public SwerveKinematics(double forward, double strafe) {
        this.forward = forward;
        this.strafe = strafe;
    }

    @Override
    public String toString() {
        return "SwerveKinematics(strafe: " + strafe + ", forward: " + forward + ")";
    }

    // hashCode, equals
}
double hdg = 1.0;
SwerveKinematics p = new SwerveKinematics(
    forward * Math.cos(hdg) + strafe * Math.sin(hdg),
    -forward * Math.sin(hdg) + strafe * Math.cos(hdg)
);
#[derive(Debug, Copy, Clone, PartialEq, PartialOrd)]
struct SwerveKinematics {
    forward: f64,
    strafe: f64,
}
let hdg = 1.0;
let p = SwerveKinematics {
    forward: forward * hdg.cos() + strafe * hdg.sin(),
    strafe: -forward * hdg.sin() + strafe * hdg.cos(),
};

Traits for Extending Foreign Types

There’s a common pattern you’ll see in lots of FRC code:

foo(convertTicksToMeters(talon.getSelectedSensorPosition()))
// OR, possibly worse
foo(talon.getSelectedSensorPosition() * METERS_PER_TICK)

In C++, this is even worse, as integers can be implicitly converted to floats and vice versa. Though conscientious users will enable compiler warnings and -Werror to catch this, inexperienced users could easily leave out the unit conversion function. Some people may use a wrapper (JVM: “yay, more pointers to chase!”), but in Rust we have two other lines of defense to turn to first. First, there are no implicit conversions of any kind. Second, we can get converted units directly from the Talons using a custom trait.

trait TypedQuadrature {
    // The result type represents that the function can error.
    // These results are an enum that is either a successful calculation of a Meter<f64>
    // or an error that has an error code defined by CTRE (TalonSRX manufacturers).
    fn pos(&self) -> ctre::Result<Meter<f64>>;
    fn vel(&self) -> ctre::Result<MeterPerSecond<f64>>;
}

impl TypedQuadrature for TalonSRX {
    fn pos(&self) -> ctre::Result<Meter<f64>> {
        self.get_quadrature_position()
            .map(|ticks| f64::from(ticks) * crate::config::drive::ENCODER_METERS_PER_TICK)
    }

    fn vel(&self) -> ctre::Result<MeterPerSecond<f64>> {
        // ...
    }
}

A trait is like an interface, but cooler. Here we are using a special feature of traits: they can be implemented on types we didn’t declare ourselves. Traits can also be scoped: Rust’s module system ensures that nobody outside the drive module, where TypedQuadrature is defined, can call these methods. Only the drive Talons will have this functionality. Traits are also how Rust parameterizes generic structs and their method implementations, but that’s beyond the scope of this post.
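A minimal sketch of that scoping behavior, with a stub Talon type standing in for the real one:

```rust
mod drive {
    // Stub standing in for the real TalonSRX type.
    pub struct Talon;

    pub trait TicksExt {
        fn pos_meters(&self) -> f64;
    }

    impl TicksExt for Talon {
        fn pos_meters(&self) -> f64 {
            0.0 // a real impl would read the sensor and convert units
        }
    }
}

use drive::Talon;
// Remove this `use` and `talon.pos_meters()` below stops compiling:
use drive::TicksExt;

fn main() {
    let talon = Talon;
    assert_eq!(talon.pos_meters(), 0.0);
}
```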

More importantly, this makes it easier to do the right thing, not harder. The new invocation, talon.pos(), is easier to remember and just as fast. And if you use a custom type for meters, the type system will catch it when you forget a conversion. Meter<f64> and MeterPerSecond<f64> are types from the dimensioned crate (a crate is Rust’s word for a compilation unit or library), which does compile-time dimensional analysis to catch unit errors. For example, multiplying MeterPerSecond<f64> by Second<f64> yields a Meter<f64>. If you forget to multiply by seconds, the compiler will complain. The unit checking is zero-cost; the wrapper struct holding the internal f64 is optimized away when LLVM optimizations are enabled.
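The idea behind zero-cost unit checking can be sketched with plain newtypes (this illustrates the technique, not the dimensioned crate's actual API):

```rust
// Newtype wrappers: a unit mix-up becomes a type error at compile time.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct MetersPerSecond(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(f64);

impl std::ops::Mul<Seconds> for MetersPerSecond {
    type Output = Meters;
    fn mul(self, rhs: Seconds) -> Meters {
        Meters(self.0 * rhs.0)
    }
}

fn main() {
    let v = MetersPerSecond(2.0);
    let t = Seconds(3.0);
    let d = v * t; // velocity * time = distance
    // let oops: Meters = v; // type mismatch: does not compile
    assert_eq!(d, Meters(6.0));
}
```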

Threading

Threading is hard. Data races suck. Many teams forgo threads entirely for this reason, or just ignore race conditions. Rust is here to make things easier.

Protection From Data Races

Rust provides “fearless concurrency”: compile time prevention of data races. The Rust book talks about how the compiler achieves this. Rust’s threading model does more than just prevent data races; it forces the programmer to think about race conditions.

Take, for example, this pattern 114 used in 2018, shamelessly copied from 254, rendered here in pseudo-Python.

# Global mutable data
class RobotState:
    robotPosition = startingPosition

class Drive:
    # Run in the subsystem executor thread, in a loop
    def mainLoop():
        # Update drivetrain position
        currentPos = robotState.getPos() # get the old position
        # Update the position:
        newPos = updatePos(currentPos, encoderData, gyroData) # Another thread can sneak in right here!
        # Put the new position
        robotState.put(newPos)

class AutoRoutineOne:
    # Run in the autonomous thread
    def do():
        robotState.put(pathStartPos) # If we do this while the new position is being calculated, our start position gets overwritten
        path.execute()

The race condition materialized when we tried to zero the position to start an autonomous routine. Sometimes, it just didn’t work, because the new position from the autonomous thread was overwritten by the result of the computation in the drive thread.

Rust would not allow this code to compile, because there’s no synchronization of the data shared between the threads, which makes data races possible. There are lots of ways to fix this (associating starting positions with paths, using channels, or otherwise eliminating the global mutable state), but the minimal change is to add a mutex. The Rust compiler would still let you write this race condition even with a mutex (if you lock and unlock twice in mainLoop(), as in any language), but once we think about it, it’s obvious that the drive thread ought to hold the mutex for the whole calculation.
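A minimal sketch of that fix using std::sync::Mutex (the pose state and update are stand-ins for the real ones):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared pose state; the i64 stands in for a real pose struct.
    let pose = Arc::new(Mutex::new(0i64));

    let drive_pose = Arc::clone(&pose);
    let drive = thread::spawn(move || {
        for _ in 0..1000 {
            // Hold the lock for the whole read-modify-write, so no other
            // thread can sneak in between the read and the write.
            let mut p = drive_pose.lock().unwrap();
            *p += 1; // stand-in for updatePos(encoderData, gyroData)
        }
    });

    drive.join().unwrap();
    assert_eq!(*pose.lock().unwrap(), 1000);
}
```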

Message Passing

Many top FRC teams, including 971, 1678, and 900, design their codebases around passing messages between subsystems. 971 and 900 use one process per subsystem, while 1678 uses one thread per subsystem. Both have their merits. Teaching this paradigm to students is undoubtedly valuable; it is the modern paradigm of robotics and of systems programming in general; see ROS and the popularity of microservices. The following comparisons are based on my limited knowledge of these large codebases; I am likely to be wrong about some things.

Rust could make this paradigm more accessible, as it comes with a built-in mechanism for this sort of thing: channels. These match almost exactly what 1678 implemented themselves for cross-thread message passing. Neither C++ nor Java comes with channels built in. Rust ships channels in its standard library, and a one-line change to Cargo.toml gives access to crossbeam’s channels, some of the fastest around.
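Here's a minimal sketch of per-subsystem message passing with the standard library's channels (the message type and subsystem are hypothetical):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type for a drive subsystem.
#[derive(Debug)]
enum DriveMessage {
    SetOpenLoop(f64),
    Stop,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // The subsystem thread owns the receiver and drains its mailbox.
    let drive = thread::spawn(move || {
        for msg in rx {
            match msg {
                DriveMessage::SetOpenLoop(power) => println!("driving at {}", power),
                DriveMessage::Stop => break,
            }
        }
    });

    tx.send(DriveMessage::SetOpenLoop(0.5)).unwrap();
    tx.send(DriveMessage::Stop).unwrap();
    drive.join().unwrap();
}
```

Cloning the sender gives every other subsystem its own handle into the same mailbox.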

971 and 1678 use protobuf, or something close to it. 900 uses ROS messages, which come with a host of their own benefits. If you were to use Rust for inter-process messages, you would also want some sort of serialization format, such as protobuf or the newer FlatBuffers, which has Rust support. Bincode is a Rust-only serialization format that has seen reasonable popularity, too. Serde-based formats like Bincode support code generation using only Rust macros, as exemplified below; no external program or build-system step is needed.

However, if things stay within one process, we can just as easily use normal structs and enums. Rust enums also allow us to put multiple message types into one channel. This does, however, inflate smaller variants to take up the same space as the largest variant, and there is a byte or two taken up by the enum tag. You can always, of course, serialize a single message type per channel, as appears to be the norm in 1678 and 971’s code. This complexity tradeoff can be decided individually. On 114, we are going to look into implementing a 1678-style message passing system in Rust. So far we’ve focused more on getting Rust working fully. I’ll update this post with the results.

Speed without Compromise

Rust is fast, like, really fast. Like C++, Rust pursues “Zero-Cost Abstractions”. The same things that make C++ fast make Rust fast: AOT compilation/optimization, stack allocation, and monomorphization of templates (C++) and type parameters (Rust). In the language benchmarks game, Rust and C/C++ bunnyhop each other regularly.

Both Rust and C++ immediately have the upper hand over Java. Moreover, Java will rarely get to reap its long-term JIT benefits in a 3-minute match. I’m not 100% certain whether HotSpot is even packaged with the RoboRIO JRE. Though many teams do fine with Java, some, like 971 and 1678, need the speed of C++ to run their control loops at the desired speed. Lower latency and higher throughput are relatively easy ways to increase the performance of any controller.

Rust can match C++’s speed and feature set while remaining memory-safe, thread-safe, and user-friendly, which is critical for…

Teaching and Trusting New Programmers

Learning and keeping up with C++ is hard. Not even Bjarne Stroustrup can keep it all in his head at once.

C++ has indeed become too “expert friendly” at a time where the degree of effective formal education of the average software developer has declined. However, the solution is not to dumb down the programming languages but to use a variety of programming languages and educate more experts. There has to be languages for those experts to use– and C++ is one of those languages. [src]

The Rust team has made teachability a priority. Every RFC to the language has to include a section on how to teach the change to new users. Many new programmers we get in FRC already have experience in a language like Java or Python. There are far fewer new concepts to learn when transitioning from these languages to Rust than to C++.

The Friendly Compiler

A meme of a compiler error

Any time a Rust program fails to compile, the compiler points the user toward documentation that includes examples. Often, it can suggest how to fix the issue, and 95% of the time its suggestion is correct. Though GCC and Clang have been making strides toward friendlier error messages, the Rust compiler remains unmatched.

Rust explaining to replace an ambiguous unicode character

Safety

C++ kills memory safety

A new programmer learning Rust will never have to learn the words:

  • Segmentation Fault
  • Null Dereference
  • Dangling reference or Use After Free
  • Buffer overflow
  • Pointer
  • Data Race
  • Uninitialized
  • Valgrind

Valgrind vs rust compiler

More importantly, this means that when reviewing Rust code, I don’t have to worry about:

  • Segmentation Faults
  • Null Dereferences
  • Dangling references or Use After Frees
  • Buffer overflows
  • Pointers
  • Data Races
  • Uninitialized Variables
  • Learning how to actually use Valgrind correctly

(Of course you and I, Dean’s own blessed programming pedagogues, could never make such errors ourselves! /sarcasm)

How Rust actually achieves these guarantees is beyond the scope of this post, but is an interesting topic in and of itself, covered in most Rust tutorials.

The Build System

I didn’t understand make, CMake, or any C/C++ build system until I became familiar with the steps involved: preprocessing, compilation to object code, linking, and so on. You can hardly split your code into files without running into an issue if you’re not careful. Cargo, Rust’s build system, removes the need to consider this complexity. Instead of writing rules for separate files, you specify a single library or executable entry point plus some metadata, and you’re done. Cargo and the module system take care of incremental compilation and of splitting code across files. Adding a dependency is one line in a config file, as with node/npm. No vendoring, no Gradle, no Maven.

[package]
name = "NAME"
version = "0.1.0"
authors = ["Name <name@name.name>"]
edition = "2018"

[lib]
name = "my_lib"
path = "src/lib.rs"

[[bin]]
name = "my_bin"
path = "src/main.rs"

[dependencies]
ctre = { git = "https://github.com/Lytigas/ctre-rs.git", branch = "2019-cargo" }
wpilib = "0.4.0"

[package.metadata.frc]
rio-address = "10.1.14.2"
target-dir = "./target"
executable-name = "my_lib"

Style and Convention

Rust has an official formatting tool, and it’s very good. There’s no question how to format your code; the autoformatter does it for you. It can be installed using the official compiler management tool, rustup:

rustup component add rustfmt
cargo fmt

Done. Forget about debating formatting style and teaching it to new programmers. Compiler lints also enforce proper casing for variable names, functions, structs, etc.

Compiler warnings chase man

Other various bad practices can be found automatically by clippy, a linting tool. Each lint is another thing you can’t mess up. Here are some notable ones you might see in FRC.

Low-Level Access

Rust has worked hard to make interop with existing platforms easy. If for some weird reason 971 is looking to use Rust, they won’t have to look very far to find all the syscalls they use; the libc crate has all of them already. If not, adding one is as simple as adding a function signature, even I could do it!

Any C project is relatively easy to integrate with. bindgen can automatically generate rust declarations for the FFI, and cc can even compile C code at Rust build time. When writing Rust bindings to WPILib and CTRE Phoenix, I used bindgen heavily.

Getting More Batteries and Generating New Batteries

The Rust standard library includes some useful things C++ does not have, like a nice TCP/UDP interface, but the real strength of Rust is the ease of dependency management and the breadth of crates (the Rust ecosystem’s version of a downloadable package or compilation unit) on crates.io, the Rust crate registry. Here’s a list of some packages we used in 2019.

[dependencies]
ctre = { git = "https://github.com/Lytigas/ctre-rs.git", branch = "2019-cargo" }
navx = { git = "https://github.com/Eaglestrike/navx-rs.git" }
debug_stub_derive = "0.3" # Allows #[derive(Debug)] for structs with non-Debug types.
lazy_static = "1" # Lazy initialization of non-constant statics
wpilib = "0.4.0" # WPILib functionality
crossbeam-channel = "0.3.6" # Very, very Fast channels
bus = "2.0.1" # Fast broadcast channels
alga = "0.7.2" # Abstract Algebra traits for generics
csv = "1.0.0" # CSV dumps
dimensioned = "0.7.0" # Compile time dimensional analysis
approx = "0.3.0" # Approximate equality comparisons of floats
paste = "0.1.4" # Create/Paste identifiers in macros
serde = "1.0.84" # Serialization framework
serde_derive = "1.0.84" # Code generation for serialization
serde_cbor = "0.9.0" # Serialize into the CBOR format
ctre-elevator-tuning = {path = "ctre-elevator-tuning"} # Separate package to hold elevator code

To show off the power of some crates and Rust’s macros, let’s look at some serialization code to turn a vision packet into a CBOR array. First, in C++ using Intel’s TinyCbor.

#define CBOR_CHCK(call) \
    { \
        CborError err = call; \
        if (err != CborNoError) { \
            throw err; \
        } \
    }

#define CBOR_VAL(call) \
    if (!call) { \
        throw CborError::CborErrorImproperValue; \
    }

struct Packet {
    int64_t micros;
    float x;
    float y;

    size_t cbor_serialize(uint8_t *buffer, size_t maxlen) const
    {
        CborEncoder encoder, arrayEncoder;
        cbor_encoder_init(&encoder, buffer, maxlen, 0);
        CBOR_CHCK(cbor_encoder_create_array(&encoder, &arrayEncoder, 3));
        CBOR_CHCK(cbor_encode_int(&arrayEncoder, this->micros));
        CBOR_CHCK(cbor_encode_float(&arrayEncoder, this->x));
        CBOR_CHCK(cbor_encode_float(&arrayEncoder, this->y));
        CBOR_CHCK(cbor_encoder_close_container(&encoder, &arrayEncoder));
        return cbor_encoder_get_buffer_size(&encoder, buffer);
    };
    static Packet cbor_deserialize(uint8_t *buffer, size_t datalen)
    {
        CborParser parser;
        CborValue value;
        Packet result;
        CBOR_CHCK(cbor_parser_init(buffer, datalen, 0, &parser, &value));

        CborValue inArray;
        if (!cbor_value_is_array(&value))
            throw CborError::CborErrorImproperValue;

        CBOR_CHCK(cbor_value_enter_container(&value, &inArray));

        CBOR_VAL(cbor_value_is_integer(&inArray));
        CBOR_CHCK(cbor_value_get_int64(&inArray, &(result.micros)));

        CBOR_CHCK(cbor_value_advance(&inArray));
        CBOR_VAL(cbor_value_is_float(&inArray));
        CBOR_CHCK(cbor_value_get_float(&inArray, &(result.x)));

        CBOR_CHCK(cbor_value_advance(&inArray));
        CBOR_VAL(cbor_value_is_float(&inArray));
        CBOR_CHCK(cbor_value_get_float(&inArray, &(result.y)));
        // Advance past the last element; leave_container requires the
        // iterator to be at the end of the array.
        CBOR_CHCK(cbor_value_advance(&inArray));

        CBOR_CHCK(cbor_value_leave_container(&value, &inArray));
        return result;
    };
};

and here’s that same code in Rust:

#[derive(Debug, Copy, Clone, Serialize, Deserialize)]
pub struct Packet {
    micros: i64,
    x: f32,
    y: f32,
}

The #[derive(Serialize, Deserialize)] attribute generates the serialization and deserialization code for you. The struct is now usable with any serde-compatible serializer, such as serde_cbor. Other crates can also integrate with serde; for example, the csv crate can serialize any struct implementing Serialize directly, without any boilerplate code.

Similarly, #[derive(Debug)] implements the Debug trait, which specifies how to print a struct for debugging, with pretty-printing available for large structs. No more implementing toString() or writing cout << s.a << '\t' << s.b << '\t' << s.c << '\t' << s.d << endl;. Rust also generates the formatting code at compile time from a Python-like format syntax. Here’s an example of pretty-printing:

use std::collections::HashMap;

#[derive(Debug)]
struct Bar<'a> {
    r: &'a str,
}

#[derive(Debug)]
struct Foo<'a> {
    s: String,
    v: Vec<Bar<'a>>,
    h: HashMap<&'a str, &'a Bar<'a>>,
}

fn main() {
    // Bind the Bars to locals first; inserting a reference to a temporary
    // (h.insert("key1", &Bar {...})) would be rejected by the borrow checker.
    let bar1 = Bar { r: "val1" };
    let bar2 = Bar { r: "val2" };

    let mut h = HashMap::new();
    h.insert("key1", &bar1);
    h.insert("key2", &bar2);

    let f = Foo {
        s: "Name of Foo".to_owned(),
        v: vec![Bar { r: "123" }, Bar { r: "456" }, Bar { r: "789" }],
        h,
    };
    println!("{:#?}", f);
}

This prints:

Foo {
    s: "Name of Foo",
    v: [
        Bar {
            r: "123"
        },
        Bar {
            r: "456"
        },
        Bar {
            r: "789"
        }
    ],
    h: {
        "key2": Bar {
            r: "val2"
        },
        "key1": Bar {
            r: "val1"
        }
    }
}

In general, macros in Rust operate on the abstract syntax tree, rather than on raw strings like C/C++ macros. They come in two forms: token substitutions (declarative macros) and functions that operate on a token stream (procedural macros). The #[derive(...)]s above are the latter kind and are somewhat difficult to write, but the former kind makes it easy to automate repetitive code.

114 uses our own edge detection for driver button presses, and we also separate binding a named action to a button from binding that named action to a sent message. In other words, we create a Controls trait (you can think of it as an interface). This means lots of repetitive wrapper code, which can be replaced with a macro.
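The Edge and EdgeDetector types used below aren’t shown in this post. Here’s a minimal sketch of what they might look like (the names and variants are my guesses for illustration, not 114’s actual code):

```rust
// An edge detector compares the current button reading to the previous one
// and reports the transition.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Edge {
    Rising,
    Falling,
    Flat(bool), // unchanged, currently holding this value
}

impl Edge {
    pub fn rising(self) -> bool {
        self == Edge::Rising
    }
}

#[derive(Debug)]
pub struct EdgeDetector {
    last: bool,
}

impl EdgeDetector {
    pub fn new() -> Self {
        EdgeDetector { last: false }
    }

    pub fn get(&mut self, current: bool) -> Edge {
        let edge = match (self.last, current) {
            (false, true) => Edge::Rising,
            (true, false) => Edge::Falling,
            (_, same) => Edge::Flat(same),
        };
        self.last = current;
        edge
    }
}

fn main() {
    let mut d = EdgeDetector::new();
    assert_eq!(d.get(false), Edge::Flat(false));
    assert_eq!(d.get(true), Edge::Rising);
    assert_eq!(d.get(true), Edge::Flat(true));
    assert_eq!(d.get(false), Edge::Falling);
}
```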

extern crate paste;
macro_rules! wrapper_fields { // declarative macro definition
    // This case of the macro accepts one identifier token, saved in $name,
    // and then a series of identifiers separated by commas saved in $x
    ($name:ident, $( $x:ident ),*) => {
        // These tokens will be substituted into the compiled code, creating a struct named $name.
        #[derive(Debug)]
        pub struct $name<T: Controls> {
            c: T,
        $(
            // Create an EdgeDetector field for each identifier in $x
            $x: EdgeDetector,
        )*
        }

        impl<T: Controls> $name<T> {
            pub fn new(c: T) -> Self {
                Self {
                    c,
                    // Initialize the EdgeDetectors with EdgeDetector::new()
                    $(
                        $x: EdgeDetector::new(),
                    )*
                }
            }

            // Create a getter that puts the button value into the EdgeDetector
            $(
                pub fn $x (&mut self) -> Edge {
                    self.$x.get(self.c.$x())
                }
            )*

            // Create a raw getter
            paste::item! {
                $(
                    pub fn [<$x _raw>] (&mut self) -> bool {
                        self.c.$x()
                    }
                )*
            }
        }
    };
}
wrapper_fields! { EdgeWrapper,
    low_gear,
    quick_turn,
    ball_intake,
    abort_ball_intake,
    outtake_ball,
    // etc.
}

// Add methods for non-boolean items
// This can be read as, "if a type T implements the Controls trait, then EdgeWrapper<T> implements the following methods"
impl<T: Controls> EdgeWrapper<T> {
    pub fn throttle(&mut self) -> f64 {
        self.c.throttle()
    }

    pub fn wheel(&mut self) -> f64 {
        self.c.wheel()
    }
}

The macro generates code for a struct that wraps any struct implementing Controls and has functions for edge-detecting all of the specified members:

pub struct EdgeWrapper<T: Controls> {
    c: T,
    low_gear: EdgeDetector,
    quick_turn: EdgeDetector,
    ball_intake: EdgeDetector,
    abort_ball_intake: EdgeDetector,
    outtake_ball: EdgeDetector,
    // etc
}
impl <T: Controls> EdgeWrapper<T> {
    pub fn new(c: T) -> Self {
        Self{
            c,
            low_gear: EdgeDetector::new(),
            quick_turn: EdgeDetector::new(),
            ball_intake: EdgeDetector::new(),
            abort_ball_intake: EdgeDetector::new(),
            outtake_ball: EdgeDetector::new(),
            // etc
        }
    }
    pub fn low_gear(&mut self) -> Edge {
        self.low_gear.get(self.c.low_gear())
    }
    pub fn quick_turn(&mut self) -> Edge {
        self.quick_turn.get(self.c.quick_turn())
    }
    pub fn ball_intake(&mut self) -> Edge {
        self.ball_intake.get(self.c.ball_intake())
    }
    pub fn abort_ball_intake(&mut self) -> Edge {
        self.abort_ball_intake.get(self.c.abort_ball_intake())
    }
    pub fn outtake_ball(&mut self) -> Edge {
        self.outtake_ball.get(self.c.outtake_ball())
    }
    // etc
    pub fn low_gear_raw(&mut self) -> bool {
        self.c.low_gear()
    }
    pub fn quick_turn_raw(&mut self) -> bool {
        self.c.quick_turn()
    }
    pub fn ball_intake_raw(&mut self) -> bool {
        self.c.ball_intake()
    }
    pub fn abort_ball_intake_raw(&mut self) -> bool {
        self.c.abort_ball_intake()
    }
    pub fn outtake_ball_raw(&mut self) -> bool {
        self.c.outtake_ball()
    }
    // etc
}
impl <T: Controls> EdgeWrapper<T> {
    pub fn throttle(&mut self) -> f64 { self.c.throttle() }
    pub fn wheel(&mut self) -> f64 { self.c.wheel() }
}

Though this is somewhat unrelated to macros, combining this with Rust’s closures leads to fairly declarative code:

// If the abort_ball_intake button is rising (just pressed), send the ForceAbortBall command through the channel to the superstructure
if self.controls.abort_ball_intake().rising() {
    self.superstructure
        .send(SsCmd::ForceAbortBall)
        .expect("SS disconnected");
}
self.controls.ball_intake().sig_send(|is_intk| {
    self.superstructure
        .send(SsCmd::BallIntake(is_intk))
        .expect("SS disconnected");
});
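The real code uses crossbeam-channel, but the same command-channel pattern can be sketched with the standard library’s mpsc (the SsCmd variants here are just the two that appear in the snippet above):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical superstructure command enum, mirroring the snippet above.
#[derive(Debug, PartialEq)]
enum SsCmd {
    BallIntake(bool),
    ForceAbortBall,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // The controls thread sends commands...
    let sender = thread::spawn(move || {
        tx.send(SsCmd::BallIntake(true)).expect("SS disconnected");
        tx.send(SsCmd::ForceAbortBall).expect("SS disconnected");
        // tx is dropped here, which ends the receiver's loop below.
    });

    // ...and the superstructure thread reacts to them in order.
    let mut log = Vec::new();
    for cmd in rx {
        log.push(cmd);
    }
    sender.join().unwrap();
    assert_eq!(log, vec![SsCmd::BallIntake(true), SsCmd::ForceAbortBall]);
}
```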

I didn’t understand any of that

I’m glad you read some of it anyway! If you’re at all interested, I’d be happy to help you learn Rust or explain anything I talked about.

I’m intrigued

If you want to use/learn Rust, check out the book and the FRC libraries. This has been my passion project for a while now, and it’d be great if people actually used it. Feel free to reach out to me however you like to talk about Rust/FRC/whatever.