Behavior:Dead Reckoning
| Dead Reckoning | |
|---|---|
| Type | Behavior (Algorithm) |
| Requires Capabilities | Capability:Optical Odometry or Capability:IMU Sensing, Capability:Differential Drive |
| Enables Activities | Activity:Dead Reckoning Navigation |
| Difficulty | Beginner to Intermediate |
| Implementations | SimpleBot:Dead Reckoning Implementation |
| Status | Fully Documented |
Dead reckoning is a navigation technique where a robot estimates its current position by tracking all motion from a known starting position. The term comes from maritime navigation ("deduced reckoning") and involves continuously calculating position based on direction and distance traveled, without external reference points.
Overview
Dead reckoning works by:
- Starting from a known position (typically x=0, y=0, heading=0)
- Measuring all motion (linear and angular)
- Continuously updating the estimated position based on motion measurements
- Maintaining a pose estimate (x position, y position, heading angle θ)
Unlike GPS or beacon-based navigation, dead reckoning is completely self-contained and works in any environment. However, small measurement errors accumulate over time, causing position estimates to drift from the true position.
Odometry-Based Dead Reckoning
Odometry-based dead reckoning uses wheel encoder measurements to track motion. This is the most common approach for ground robots and is the method used by SimpleBot.
Principle of Operation
For a differential drive robot:
- Count encoder pulses from left and right wheels independently
- Calculate distance traveled by each wheel using known wheel circumference and encoder resolution
- Determine robot motion based on wheel motion differences:
  - Equal wheel distances → Robot moves straight
  - Different wheel distances → Robot follows a curved path (arc)
- Update robot pose (x, y, θ) based on calculated motion
Mathematical Foundation
The basic calculations for odometry-based dead reckoning:
Step 1: Calculate wheel distances
distance_left = left_pulses × distance_per_pulse
distance_right = right_pulses × distance_per_pulse
Where:
distance_per_pulse = wheel_circumference / pulses_per_revolution
wheel_circumference = π × wheel_diameter
Step 2: Calculate motion parameters
distance_traveled = (distance_left + distance_right) / 2
angle_turned = (distance_right - distance_left) / wheel_separation
Step 3: Update pose
For straight motion (distance_left ≈ distance_right):
x_new = x_old + distance_traveled × cos(θ_old)
y_new = y_old + distance_traveled × sin(θ_old)
θ_new = θ_old
For curved motion (general case):
θ_new = θ_old + angle_turned
x_new = x_old + distance_traveled × cos(θ_old + angle_turned/2)
y_new = y_old + distance_traveled × sin(θ_old + angle_turned/2)
Pseudocode Example
import math

# Initialize robot pose
pose_x = 0.0
pose_y = 0.0
pose_theta = 0.0

# Robot physical parameters
WHEEL_DIAMETER = 0.065    # meters
WHEEL_SEPARATION = 0.15   # meters
PULSES_PER_REV = 20

# Calculate distance per pulse
WHEEL_CIRCUMFERENCE = math.pi * WHEEL_DIAMETER
DISTANCE_PER_PULSE = WHEEL_CIRCUMFERENCE / PULSES_PER_REV

def update_odometry(left_pulses, right_pulses):
    global pose_x, pose_y, pose_theta

    # Calculate distances traveled by each wheel
    dist_left = left_pulses * DISTANCE_PER_PULSE
    dist_right = right_pulses * DISTANCE_PER_PULSE

    # Calculate center distance and rotation
    dist_center = (dist_left + dist_right) / 2.0
    delta_theta = (dist_right - dist_left) / WHEEL_SEPARATION

    # Update pose using midpoint method
    if abs(delta_theta) < 0.001:  # Straight motion
        pose_x += dist_center * math.cos(pose_theta)
        pose_y += dist_center * math.sin(pose_theta)
    else:  # Curved motion
        # Use heading at midpoint of arc
        mid_theta = pose_theta + delta_theta / 2.0
        pose_x += dist_center * math.cos(mid_theta)
        pose_y += dist_center * math.sin(mid_theta)

    pose_theta += delta_theta
    # Normalize angle to [-π, π]
    pose_theta = math.atan2(math.sin(pose_theta), math.cos(pose_theta))

    return (pose_x, pose_y, pose_theta)

# Main loop
while robot_running:
    # Read encoder increments since last update
    left_pulses = read_left_encoder_delta()
    right_pulses = read_right_encoder_delta()

    # Update position estimate
    position = update_odometry(left_pulses, right_pulses)

    # Use position for navigation decisions
    navigate_using_position(position)
Odometry Error Sources
Common error sources in odometry-based dead reckoning:
- Wheel slippage: Wheels slip on smooth surfaces or during rapid acceleration
- Uneven wheel diameters: Manufacturing variations or uneven wear
- Mechanical play: Backlash in gears and drivetrain
- Encoder resolution: Limited pulse count causes quantization errors
- Surface irregularities: Bumps, cracks, and slopes affect wheel rotation
- Wheel alignment: Misaligned wheels cause systematic drift
IMU-Based Dead Reckoning
IMU-based dead reckoning uses inertial measurement unit sensors (accelerometers and gyroscopes) to track motion. This approach is more complex than odometry and requires sophisticated sensor fusion.
Principle of Operation
- Measure acceleration using 3-axis accelerometer
- Remove gravity component from acceleration readings
- Integrate acceleration to obtain velocity
- Integrate velocity to obtain position
- Measure angular velocity using 3-axis gyroscope
- Integrate gyroscope readings to obtain heading angle
- Apply sensor fusion to combine all measurements and reduce drift
Mathematical Foundation
For position (simplified 2D case):
# Measure acceleration in body frame
accel_body = read_accelerometer()

# Rotate to world frame using current heading
accel_world_x = accel_body_x × cos(θ) - accel_body_y × sin(θ)
accel_world_y = accel_body_x × sin(θ) + accel_body_y × cos(θ)

# Remove gravity (9.81 m/s² along the z-axis)
accel_world_z = accel_body_z - 9.81

# Integrate to get velocity
velocity_x = velocity_x_prev + accel_world_x × dt
velocity_y = velocity_y_prev + accel_world_y × dt

# Integrate to get position
position_x = position_x_prev + velocity_x × dt
position_y = position_y_prev + velocity_y × dt
For heading:
# Measure angular velocity
gyro_z = read_gyroscope_z()

# Integrate to get heading
θ = θ_prev + gyro_z × dt
Pseudocode Example
import math
import time

# Initialize state
position = [0.0, 0.0, 0.0]   # x, y, z
velocity = [0.0, 0.0, 0.0]
theta = 0.0                  # heading angle
prev_time = time.monotonic()

# IMU calibration offsets (determined during calibration)
ACCEL_BIAS = [0.05, -0.03, 0.02]   # m/s²
GYRO_BIAS = [0.01, 0.02, -0.01]    # rad/s
GRAVITY = 9.81                     # m/s²

def rotation_matrix_2d(angle):
    """Create 2D rotation matrix"""
    return [[math.cos(angle), -math.sin(angle)],
            [math.sin(angle),  math.cos(angle)]]

def update_imu_dead_reckoning():
    global position, velocity, theta, prev_time

    # Calculate time step
    current_time = time.monotonic()
    dt = current_time - prev_time
    prev_time = current_time

    # Read IMU sensors
    accel_body = read_accelerometer()   # [ax, ay, az] in body frame
    gyro_body = read_gyroscope()        # [gx, gy, gz] in body frame

    # Apply calibration corrections
    accel_body = [accel_body[i] - ACCEL_BIAS[i] for i in range(3)]
    gyro_body = [gyro_body[i] - GYRO_BIAS[i] for i in range(3)]

    # Update heading from gyroscope
    theta += gyro_body[2] * dt   # z-axis gyro

    # Rotate acceleration to world frame
    rot_mat = rotation_matrix_2d(theta)
    accel_world_x = rot_mat[0][0] * accel_body[0] + rot_mat[0][1] * accel_body[1]
    accel_world_y = rot_mat[1][0] * accel_body[0] + rot_mat[1][1] * accel_body[1]
    accel_world_z = accel_body[2]

    # Remove gravity from z-axis (assumes the robot stays level)
    accel_world_z -= GRAVITY

    # Integrate acceleration to velocity
    velocity[0] += accel_world_x * dt
    velocity[1] += accel_world_y * dt
    velocity[2] += accel_world_z * dt

    # Integrate velocity to position
    position[0] += velocity[0] * dt
    position[1] += velocity[1] * dt
    position[2] += velocity[2] * dt

    return (position, velocity, theta)

# Main loop
while robot_running:
    position_estimate, velocity_estimate, heading = update_imu_dead_reckoning()

    # Use position for navigation
    navigate_using_position(position_estimate)

    # Sleep to maintain consistent update rate
    time.sleep(0.01)   # 100 Hz update rate
IMU Error Sources
IMU-based dead reckoning faces significant challenges:
- Gyroscope drift: Bias errors accumulate rapidly when integrated
- Accelerometer bias: Small constant errors become large position errors after double integration
- Sensor noise: Random noise accumulates through integration
- Gravity vector estimation: Errors in removing gravity cause acceleration measurement errors
- Temperature effects: Sensor characteristics change with temperature
- Vibration: High-frequency vibration can bias accelerometer readings
IMU-only dead reckoning typically exhibits much higher drift rates than odometry-based systems. Most practical systems combine IMU data with other sensors (sensor fusion).
Error Accumulation
All dead reckoning systems suffer from unbounded error growth because small measurement errors are integrated over time:
- Odometry drift: Typically 5-10% of distance traveled under good conditions
  - Example: After driving 10 meters, the position error might be 0.5-1.0 meters
  - Error grows approximately linearly with distance
  - Systematic errors (such as unequal wheel diameters) cause circular drift patterns
- IMU drift: Much worse, due to double integration of accelerometer noise
  - Position error grows quadratically with time
  - Without corrections, IMU-only systems become unusable within seconds
  - Gyro drift causes heading errors that compound position errors
- Combined effects: Heading errors are particularly problematic
  - A 1-degree heading error causes roughly 1.7% lateral drift
  - After driving 10 meters with a 1° heading error, the lateral position error is about 0.17 meters (see the worked sketch below)
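As a quick check of the figures above, the lateral offset produced by a constant heading error is approximately distance × sin(heading_error). A minimal worked sketch:

import math

distance = 10.0                     # meters driven
heading_error = math.radians(1.0)   # 1 degree heading error, in radians

lateral_error = distance * math.sin(heading_error)
drift_fraction = math.sin(heading_error)

print(f"Lateral error: {lateral_error:.2f} m")             # ≈ 0.17 m
print(f"Drift: {100 * drift_fraction:.1f}% of distance")   # ≈ 1.7%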
Coordinate Systems
Dead reckoning requires careful management of coordinate frames:
Robot Frame (Body Frame)
- Origin at robot center
- X-axis points forward
- Y-axis points left
- Rotates with the robot
World Frame (Global Frame)
- Fixed reference frame
- Origin at starting position
- X and Y axes aligned with environment
- Does not move
Pose Representation
Robot pose in world frame: (x, y, θ)
- x: Position along world X-axis
- y: Position along world Y-axis
- θ: Heading angle (orientation) relative to world X-axis
Coordinate Transformation
To transform a point from robot frame to world frame:
x_world = x_robot × cos(θ) - y_robot × sin(θ) + pose_x
y_world = x_robot × sin(θ) + y_robot × cos(θ) + pose_y
Using rotation matrix form:
[x_world]   [cos(θ)  -sin(θ)] [x_robot]   [pose_x]
[y_world] = [sin(θ)   cos(θ)] [y_robot] + [pose_y]
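This transformation maps directly to code. A minimal sketch (the helper name robot_to_world is illustrative, not part of any particular API):

import math

def robot_to_world(x_robot, y_robot, pose_x, pose_y, theta):
    """Transform a point from the robot (body) frame into the world frame."""
    x_world = x_robot * math.cos(theta) - y_robot * math.sin(theta) + pose_x
    y_world = x_robot * math.sin(theta) + y_robot * math.cos(theta) + pose_y
    return (x_world, y_world)

# Example: an obstacle seen 0.5 m straight ahead of a robot at pose (2.0, 1.0, 90°)
print(robot_to_world(0.5, 0.0, 2.0, 1.0, math.pi / 2))   # ≈ (2.0, 1.5)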
Improving Accuracy
Several techniques can reduce dead reckoning errors:
Sensor Fusion
Combine multiple sensor types to leverage their complementary characteristics:
- Odometry + IMU: Odometry for position, IMU for rapid orientation updates
- Complementary filtering: Use the IMU for short-term estimates (fast response) and odometry for the long term (low drift); a minimal sketch follows this list
- Kalman filtering: Optimal fusion of multiple noisy sensors
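As one concrete illustration of complementary filtering, the sketch below blends a gyro-integrated heading (responsive, but drifts) with the odometry heading (slower to respond, but with lower drift). The names fuse_heading, gyro_z, and theta_odometry are illustrative assumptions, not part of any existing API:

ALPHA = 0.98   # weighting factor: closer to 1.0 trusts the gyro more

def fuse_heading(theta_fused, gyro_z, theta_odometry, dt):
    # Short-term: propagate the current estimate with the gyro rate
    theta_gyro = theta_fused + gyro_z * dt
    # Long-term: gently pull the estimate toward the odometry heading
    return ALPHA * theta_gyro + (1.0 - ALPHA) * theta_odometry

Note that blending raw angles this way misbehaves near the ±π wrap-around; a practical version normalizes the angle difference before mixing.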
External Reference Points
Reset or correct position estimates when passing known landmarks:
- Beacon detection: Reset position when detecting a marker at known location
- Wall following: Use distance sensors to maintain known offset from wall
- Line detection: Reset position when crossing known lines or boundaries
- Visual landmarks: Use camera to identify known features
Calibration
Reduce systematic errors through careful calibration:
- Wheel diameter: Measure actual effective diameter under load
- Wheel separation: Measure actual track width between wheel contact points
- Encoder resolution: Verify actual pulses per revolution
- IMU bias: Measure sensor offsets while stationary
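A minimal sketch of the stationary IMU bias measurement, reusing the placeholder read_accelerometer()/read_gyroscope() calls and sign convention from the IMU example above (the accelerometer is assumed to read +9.81 m/s² on its z-axis at rest):

import time

GRAVITY = 9.81   # m/s²

def calibrate_imu(num_samples=500, sample_interval=0.005):
    """Average sensor readings while the robot is stationary to estimate biases."""
    accel_sum = [0.0, 0.0, 0.0]
    gyro_sum = [0.0, 0.0, 0.0]
    for _ in range(num_samples):
        accel = read_accelerometer()
        gyro = read_gyroscope()
        for i in range(3):
            accel_sum[i] += accel[i]
            gyro_sum[i] += gyro[i]
        time.sleep(sample_interval)
    accel_bias = [s / num_samples for s in accel_sum]
    gyro_bias = [s / num_samples for s in gyro_sum]
    # The z accelerometer should read +GRAVITY at rest, so only the excess counts as bias
    accel_bias[2] -= GRAVITY
    return accel_bias, gyro_bias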
Closed-Loop Verification
Test dead reckoning accuracy by:
- Driving a closed path (square, circle) and measuring the final position error (see the sketch after this list)
- Comparing estimated position to known checkpoints
- Recording error statistics over multiple runs
- Adjusting calibration parameters to minimize systematic errors
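One way to run the square test, assuming hypothetical helpers drive_straight(), turn_in_place(), and get_current_pose() built on the odometry code above: drive the square open loop, then compare the reported pose against the offset you measure by hand from the start marker.

import math

SIDE_LENGTH = 1.0   # meters per side of the test square

# Drive four sides with 90° left turns; a perfect run ends back at the start pose.
for _ in range(4):
    drive_straight(SIDE_LENGTH)   # hypothetical motion helper
    turn_in_place(math.pi / 2)    # hypothetical 90° left turn helper

est_x, est_y, est_theta = get_current_pose()   # current dead reckoning estimate
print(f"Estimated pose: x={est_x:.3f} m, y={est_y:.3f} m, "
      f"heading={math.degrees(est_theta):.1f} deg")
# Compare against the physically measured offset from the start marker;
# the difference between the two is the dead reckoning error for this run.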
Advanced Techniques
- Kalman filters: Optimal state estimation with uncertainty quantification
- Particle filters: Handle non-Gaussian errors and multiple hypotheses
- Extended Kalman Filter (EKF): Handle non-linear motion models
- Unscented Kalman Filter (UKF): Better handling of severe non-linearities
SimpleBot Implementation
SimpleBot implements odometry-based dead reckoning using optical wheel encoders:
- Sensor type: Optical encoders on each wheel
- Pulse counting: Interrupt-driven pulse accumulation
- Update rate: Real-time position calculation on each encoder pulse
- Drivetrain: Differential drive with two independent motors
- Capabilities:
  - Track current (x, y, θ) pose continuously
  - Execute motion patterns (squares, circles, arbitrary paths)
  - Return to starting position via reverse path
  - Report position estimates for navigation decisions
See SimpleBot:Dead Reckoning Implementation for complete implementation details including:
- Hardware setup (encoder mounting and wiring)
- Interrupt service routines for pulse counting
- Position calculation algorithms
- Calibration procedures
- Example programs and test patterns
Practical Challenges
Challenge 1: Drive and Return
Objective: Test basic dead reckoning accuracy
- Program robot to drive a pattern (square, circle, or arbitrary path)
- Record the motion commands
- Execute the reverse motion sequence to return to the start (one way to structure this is sketched after this challenge)
- Measure final position error
Success criteria:
- Position error < 5% of total distance traveled
- Heading error < 5 degrees
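One way to structure the record-and-reverse logic, assuming hypothetical drive_straight() and turn_in_place() helpers that accept negative values for backing up and turning the opposite way:

# Record each motion command, then play the sequence back in reverse to return.
command_log = []

def do_move(distance):
    drive_straight(distance)           # hypothetical motion helper
    command_log.append(("move", distance))

def do_turn(angle):
    turn_in_place(angle)               # hypothetical turn helper
    command_log.append(("turn", angle))

def return_to_start():
    # Undo the commands in reverse order to retrace the path
    for kind, value in reversed(command_log):
        if kind == "move":
            drive_straight(-value)
        else:
            turn_in_place(-value)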
Challenge 2: Right Triangle Test
Objective: Test position accuracy across different motion types
- Drive forward distance D
- Turn 90 degrees right
- Drive forward distance D (now at far corner of square)
- Calculate and turn toward the starting position (the calculation is sketched after this challenge)
- Drive directly back to start (hypotenuse of right triangle)
- Measure final position error
Success criteria:
- Return to within 10 cm of starting position
- Demonstrates both distance and angle accuracy
This challenge tests:
- Forward distance measurement accuracy
- Turn angle accuracy
- Integration of distance and angle to calculate return path
- Cumulative error over multi-segment path
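The return-leg calculation in this challenge reduces to a bearing and a distance computed from the current pose estimate. A minimal sketch (return_leg is an illustrative name):

import math

def return_leg(x, y, theta):
    """Given the current pose estimate, return (turn_angle, distance) to head back to (0, 0)."""
    angle_to_start = math.atan2(-y, -x)   # world-frame bearing from the robot to the origin
    # Normalize the required turn to [-π, π]
    turn_angle = math.atan2(math.sin(angle_to_start - theta),
                            math.cos(angle_to_start - theta))
    distance = math.hypot(x, y)           # straight-line distance back (the hypotenuse)
    return turn_angle, distance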
Challenge 3: Long Distance Drift
Objective: Characterize error growth over distance
- Drive straight for increasing distances: 1m, 2m, 5m, 10m
- At each distance, measure lateral drift and distance error
- Calculate drift as a percentage of distance traveled (see the sketch below)
- Plot error versus distance to characterize error growth
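A small sketch for summarizing the drift test; the distance/drift pairs are placeholders to be replaced with your own tape-measured values:

# Fill in your own measurements as (distance_m, lateral_drift_m) pairs.
test_runs = [
    # (1.0, measured_drift_1),
    # (2.0, measured_drift_2),
    # ...
]

for distance, drift in test_runs:
    print(f"{distance:>5.1f} m: drift {drift:.2f} m "
          f"({100 * drift / distance:.1f}% of distance)")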
See Also
- Capability:Optical Odometry - Wheel encoder hardware and pulse counting
- Capability:IMU Sensing - Inertial measurement unit sensors
- Capability:Differential Drive - Two-wheeled drivetrain fundamentals
- Activity:Dead Reckoning Navigation - Using dead reckoning for autonomous navigation
- SimpleBot:Dead Reckoning Implementation - Complete implementation on SimpleBot
- Behavior:Wall Following - Navigation behavior that can correct dead reckoning drift