Class PoseEstimator3d<T>

java.lang.Object
edu.wpi.first.math.estimator.PoseEstimator3d<T>
Type Parameters:
T - Wheel positions type.
Direct Known Subclasses:
DifferentialDrivePoseEstimator3d, MecanumDrivePoseEstimator3d, SwerveDrivePoseEstimator3d

public class PoseEstimator3d<T> extends Object
This class wraps Odometry3d to fuse latency-compensated vision measurements with encoder measurements. Robot code should not use this directly; instead, use the particular type for your drivetrain (e.g., DifferentialDrivePoseEstimator3d). It is intended to be a drop-in replacement for Odometry3d; in fact, if you never call addVisionMeasurement(edu.wpi.first.math.geometry.Pose3d, double) and only call update(edu.wpi.first.math.geometry.Rotation3d, T), then this will behave exactly the same as Odometry3d. It is also intended to be an easy replacement for PoseEstimator, requiring only the addition of a standard deviation for Z and appropriate conversions between the 2D and 3D geometry classes. (See Pose3d(Pose2d), Rotation3d(Rotation2d), Translation3d(Translation2d), and Pose3d.toPose2d().)

update(edu.wpi.first.math.geometry.Rotation3d, T) should be called every robot loop.

addVisionMeasurement(edu.wpi.first.math.geometry.Pose3d, double) can be called as infrequently as you want; if you never call it then this class will behave exactly like regular encoder odometry.
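A minimal usage sketch, assuming a differential drivetrain and assuming DifferentialDrivePoseEstimator3d exposes a constructor analogous to its 2D counterpart (kinematics, gyro angle, wheel distances, initial pose, and the two standard-deviation vectors); the track width and standard deviations below are illustrative placeholders, not tuned values:

import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.DifferentialDrivePoseEstimator3d;
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.kinematics.DifferentialDriveKinematics;
import edu.wpi.first.math.kinematics.DifferentialDriveWheelPositions;

public class Drivetrain {
  private final DifferentialDriveKinematics m_kinematics =
      new DifferentialDriveKinematics(0.6); // track width in meters (placeholder)

  // Assumed constructor shape; state/vision std devs are [x, y, z, theta]
  // in meters and radians, chosen here purely for illustration.
  private final DifferentialDrivePoseEstimator3d m_estimator =
      new DifferentialDrivePoseEstimator3d(
          m_kinematics,
          new Rotation3d(),
          0.0,
          0.0,
          new Pose3d(),
          VecBuilder.fill(0.05, 0.05, 0.05, 0.01),
          VecBuilder.fill(0.9, 0.9, 0.9, 0.9));

  /** Call every robot loop with the latest gyro and encoder readings. */
  public Pose3d updateOdometry(Rotation3d gyroAngle, double leftMeters, double rightMeters) {
    return m_estimator.update(
        gyroAngle, new DifferentialDriveWheelPositions(leftMeters, rightMeters));
  }
}

The same pattern applies to the mecanum and swerve variants, with their respective kinematics and wheel-position types.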

  • Constructor Details

    • PoseEstimator3d

      public PoseEstimator3d(Kinematics<?,T> kinematics, Odometry3d<T> odometry, Matrix<N4,N1> stateStdDevs, Matrix<N4,N1> visionMeasurementStdDevs)
      Constructs a PoseEstimator3d.
      Parameters:
      kinematics - A correctly-configured kinematics object for your drivetrain.
      odometry - A correctly-configured odometry object for your drivetrain.
      stateStdDevs - Standard deviations of the pose estimate (x position in meters, y position in meters, z position in meters, and angle in radians). Increase these numbers to trust your state estimate less.
      visionMeasurementStdDevs - Standard deviations of the vision pose measurement (x position in meters, y position in meters, z position in meters, and angle in radians). Increase these numbers to trust the vision pose measurement less.
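      As a sketch, the two standard-deviation parameters are typically built as Matrix<N4, N1> column vectors with VecBuilder.fill; the values here are placeholders, not recommended numbers:

      import edu.wpi.first.math.Matrix;
      import edu.wpi.first.math.VecBuilder;
      import edu.wpi.first.math.numbers.N1;
      import edu.wpi.first.math.numbers.N4;

      public final class PoseEstimatorConstants {
        // [x, y, z, theta]ᵀ: meters, meters, meters, radians (placeholder values).
        public static final Matrix<N4, N1> kStateStdDevs =
            VecBuilder.fill(0.05, 0.05, 0.05, 0.01);
        public static final Matrix<N4, N1> kVisionMeasurementStdDevs =
            VecBuilder.fill(0.9, 0.9, 0.9, 0.9);
      }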
  • Method Details

    • setVisionMeasurementStdDevs

      public final void setVisionMeasurementStdDevs(Matrix<N4,N1> visionMeasurementStdDevs)
      Sets the pose estimator's trust of global measurements. This might be used to change trust in vision measurements after the autonomous period, or to change trust as distance to a vision target increases.
      Parameters:
      visionMeasurementStdDevs - Standard deviations of the vision measurements. Increase these numbers to trust global measurements from vision less. This matrix is in the form [x, y, z, theta]ᵀ, with units in meters and radians.
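      For example, a robot might trust vision less as its distance to the target grows; a sketch continuing the Drivetrain example above, where distanceMeters comes from the user's own vision code and the scaling constants are illustrative:

      public void updateVisionTrust(double distanceMeters) {
        // Larger std devs mean less trust; grow them with distance to the target.
        double xyzStdDev = 0.1 * (1.0 + distanceMeters * distanceMeters);
        double rotStdDev = 0.2 * (1.0 + distanceMeters * distanceMeters);
        m_estimator.setVisionMeasurementStdDevs(
            VecBuilder.fill(xyzStdDev, xyzStdDev, xyzStdDev, rotStdDev));
      }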
    • resetPosition

      public void resetPosition(Rotation3d gyroAngle, T wheelPositions, Pose3d poseMeters)
      Resets the robot's position on the field.

      The gyroscope angle does not need to be reset in the user's robot code. The library automatically takes care of offsetting the gyro angle.

      Parameters:
      gyroAngle - The angle reported by the gyroscope.
      wheelPositions - The current encoder readings.
      poseMeters - The position on the field that your robot is at.
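      A common use is seeding the estimator with a known field-relative pose at the start of autonomous; a sketch continuing the Drivetrain example above, with a placeholder starting pose:

      public void autonomousInit(Rotation3d gyroAngle, double leftMeters, double rightMeters) {
        // Known starting pose for this auto routine (placeholder values).
        Pose3d startPose = new Pose3d(1.5, 5.0, 0.0, new Rotation3d(0.0, 0.0, Math.PI));
        m_estimator.resetPosition(
            gyroAngle, new DifferentialDriveWheelPositions(leftMeters, rightMeters), startPose);
      }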
    • resetPose

      public void resetPose(Pose3d pose)
      Resets the robot's pose.
      Parameters:
      pose - The pose to reset to.
    • resetTranslation

      public void resetTranslation(Translation3d translation)
      Resets the robot's translation.
      Parameters:
      translation - The translation to reset to.
    • resetRotation

      public void resetRotation(Rotation3d rotation)
      Resets the robot's rotation.
      Parameters:
      rotation - The rotation to reset to.
    • getEstimatedPosition

      public Pose3d getEstimatedPosition()
      Gets the estimated robot pose.
      Returns:
      The estimated robot pose in meters.
    • sampleAt

      public Optional<Pose3d> sampleAt(double timestampSeconds)
      Returns the pose at a given timestamp, if the buffer is not empty.
      Parameters:
      timestampSeconds - The pose's timestamp in seconds.
      Returns:
      The pose at the given timestamp (or Optional.empty() if the buffer is empty).
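      One use of the buffered history is checking whether a new vision pose is plausible before adding it, by comparing it against the estimate at the frame's capture time; a sketch continuing the Drivetrain example above (captureTimeSeconds must share the FPGA epoch, and the 1 m threshold is illustrative):

      public boolean isVisionPosePlausible(Pose3d visionPose, double captureTimeSeconds) {
        // An empty buffer gives nothing to compare against, so treat the pose as not plausible.
        return m_estimator.sampleAt(captureTimeSeconds)
            .map(past -> past.getTranslation().getDistance(visionPose.getTranslation()) < 1.0)
            .orElse(false);
      }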
    • addVisionMeasurement

      public void addVisionMeasurement(Pose3d visionRobotPoseMeters, double timestampSeconds)
      Adds a vision measurement to the Kalman Filter. This will correct the odometry pose estimate while still accounting for measurement noise.

      This method can be called as infrequently as you want, as long as you are calling update(edu.wpi.first.math.geometry.Rotation3d, T) every loop.

      To promote stability of the pose estimate and make it robust to bad vision data, we recommend only adding vision measurements that are already within one meter or so of the current pose estimate.

      Parameters:
      visionRobotPoseMeters - The pose of the robot as measured by the vision camera.
      timestampSeconds - The timestamp of the vision measurement in seconds. Note that if you don't use your own time source by calling updateWithTime(double,Rotation3d,Object) then you must use a timestamp with an epoch since FPGA startup (i.e., the epoch of this timestamp is the same epoch as Timer.getFPGATimestamp().) This means that you should use Timer.getFPGATimestamp() as your time source or sync the epochs.
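      A sketch of feeding in a vision pose whose processing latency is known, continuing the Drivetrain example above and using Timer.getFPGATimestamp() from edu.wpi.first.wpilibj so the timestamp shares the FPGA epoch (visionPose and latencySeconds come from the user's own vision pipeline):

      public void onVisionPose(Pose3d visionPose, double latencySeconds) {
        // The measurement happened "latencySeconds" ago, expressed in the FPGA epoch.
        m_estimator.addVisionMeasurement(visionPose, Timer.getFPGATimestamp() - latencySeconds);
      }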
    • addVisionMeasurement

      public void addVisionMeasurement(Pose3d visionRobotPoseMeters, double timestampSeconds, Matrix<N4,N1> visionMeasurementStdDevs)
      Adds a vision measurement to the Kalman Filter. This will correct the odometry pose estimate while still accounting for measurement noise.

      This method can be called as infrequently as you want, as long as you are calling update(edu.wpi.first.math.geometry.Rotation3d, T) every loop.

      To promote stability of the pose estimate and make it robust to bad vision data, we recommend only adding vision measurements that are already within one meter or so of the current pose estimate.

      Note that the vision measurement standard deviations passed into this method will continue to apply to future measurements until a subsequent call to setVisionMeasurementStdDevs(Matrix) or this method.

      Parameters:
      visionRobotPoseMeters - The pose of the robot as measured by the vision camera.
      timestampSeconds - The timestamp of the vision measurement in seconds. Note that if you don't use your own time source by calling updateWithTime(double, edu.wpi.first.math.geometry.Rotation3d, T), then you must use a timestamp with an epoch since FPGA startup (i.e., the epoch of this timestamp is the same epoch as Timer.getFPGATimestamp()). This means that you should use Timer.getFPGATimestamp() as your time source in this case.
      visionMeasurementStdDevs - Standard deviations of the vision pose measurement (x position in meters, y position in meters, z position in meters, and angle in radians). Increase these numbers to trust the vision pose measurement less.
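      The per-measurement standard deviations allow each vision pose to be weighted by its own quality, for example by distance to the observed target; a sketch continuing the Drivetrain example above, with illustrative scaling constants:

      public void onVisionPose(Pose3d visionPose, double timestampSeconds, double distanceMeters) {
        // Farther targets get larger std devs, so they pull the estimate less.
        double xyzStdDev = 0.1 + 0.2 * distanceMeters;
        double rotStdDev = 0.3 + 0.4 * distanceMeters;
        m_estimator.addVisionMeasurement(
            visionPose,
            timestampSeconds,
            VecBuilder.fill(xyzStdDev, xyzStdDev, xyzStdDev, rotStdDev));
      }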
    • update

      public Pose3d update(Rotation3d gyroAngle, T wheelPositions)
      Updates the pose estimator with wheel encoder and gyro information. This should be called every loop.
      Parameters:
      gyroAngle - The current gyro angle.
      wheelPositions - The current encoder readings.
      Returns:
      The estimated pose of the robot in meters.
    • updateWithTime

      public Pose3d updateWithTime(double currentTimeSeconds, Rotation3d gyroAngle, T wheelPositions)
      Updates the pose estimator with wheel encoder and gyro information. This should be called every loop.
      Parameters:
      currentTimeSeconds - Time at which this method was called, in seconds.
      gyroAngle - The current gyro angle.
      wheelPositions - The current encoder readings.
      Returns:
      The estimated pose of the robot in meters.
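      A sketch of supplying the timestamp explicitly, continuing the Drivetrain example above; whatever time source is used here must also serve as the epoch for vision-measurement timestamps:

      public Pose3d updateOdometryWithTime(
          Rotation3d gyroAngle, double leftMeters, double rightMeters) {
        return m_estimator.updateWithTime(
            Timer.getFPGATimestamp(),
            gyroAngle,
            new DifferentialDriveWheelPositions(leftMeters, rightMeters));
      }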