Manhattan Distance (L₁-Norm)
Calculator to compute the taxicab distance (L₁) with formulas and examples
Manhattan Distance Calculator
What is calculated?
The Manhattan distance (also called taxicab distance or L₁-norm) is the sum of absolute differences of all components. It corresponds to distance along grid-aligned paths.
Properties
Manhattan distance:
- Also called L₁-norm or taxicab distance
- Sum of absolute differences
- Follows right-angled grid paths
- Robust to outliers
Intuition: The distance a taxi in Manhattan must drive when only allowed to follow streets on a grid.
Special cases
- |Δx| + |Δy| = number of blocks
- Unit ball is a diamond/rhombus
- Minimizes the sum of absolute deviations
Formulas for Manhattan distance
Basic formula (L₁-norm)
d₁(A, B) = Σᵢ |aᵢ − bᵢ|
Vector norm
‖x‖₁ = Σᵢ |xᵢ|
2D formula (taxicab)
d₁ = |x₂ − x₁| + |y₂ − y₁|
3D formula (space)
d₁ = |x₂ − x₁| + |y₂ − y₁| + |z₂ − z₁|
Weighted form
d_w(A, B) = Σᵢ wᵢ |aᵢ − bᵢ|, with weights wᵢ ≥ 0
Median relation
The median m minimizes Σᵢ |xᵢ − m|, just as the mean minimizes the sum of squared deviations.
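These formulas translate directly into code. A minimal Python sketch (the names manhattan and weighted_manhattan are illustrative, not from any particular library):

```python
def manhattan(a, b):
    """L1 distance: sum of absolute component-wise differences."""
    if len(a) != len(b):
        raise ValueError("points must have the same dimension")
    return sum(abs(ai - bi) for ai, bi in zip(a, b))


def weighted_manhattan(a, b, w):
    """Weighted L1 distance: each component difference is scaled by w_i >= 0."""
    return sum(wi * abs(ai - bi) for ai, bi, wi in zip(a, b, w))


# The 2D and 3D formulas are just the general case with n = 2 or n = 3:
print(manhattan((1, 1), (4, 3)))                   # |4-1| + |3-1| = 5
print(weighted_manhattan((1, 1), (4, 3), (2, 1)))  # 2*3 + 1*2 = 8
```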
Detailed calculation example
Example: Manhattan([3,4,5], [2,3,6])
Given:
- Point A = [3, 4, 5]
- Point B = [2, 3, 6]
Step 1 - Absolute differences:
- |3 - 2| = 1
- |4 - 3| = 1
- |5 - 6| = 1
Step 2 - Sum:
- 1 + 1 + 1 = 3
Interpretation: The Manhattan distance corresponds to the number of "steps" along coordinate axes.
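The same two steps as a short, self-contained Python snippet:

```python
a = [3, 4, 5]
b = [2, 3, 6]

# Step 1: absolute differences per component
diffs = [abs(ai - bi) for ai, bi in zip(a, b)]
print(diffs)       # [1, 1, 1]

# Step 2: sum them up
print(sum(diffs))  # 3
```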
Taxicab visualization
Example: From (1,1) to (4,3) in Manhattan
Grid visualization: one possible path goes 3 blocks right and 2 blocks up = 5 steps.
Calculation: d₁ = |4 − 1| + |3 − 1| = 3 + 2 = 5
All possible paths:
- 3× right, then 2× up
- 2× up, then 3× right
- Any combination of 3R + 2U
- All have length 5!
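A quick Python check (using only the standard library) that every monotone grid path from (1,1) to (4,3) has the same length:

```python
from itertools import permutations

start, goal = (1, 1), (4, 3)
dx, dy = goal[0] - start[0], goal[1] - start[1]   # 3 steps right, 2 steps up

# Every grid path without detours is some ordering of 3 'R' and 2 'U' steps.
paths = set(permutations("R" * dx + "U" * dy))
print(len(paths))               # 10 distinct orderings
print({len(p) for p in paths})  # {5} - every path takes exactly 5 steps
```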
Comparison of Lₚ norms
For points [0,0] and [3,4]
L₁ (Manhattan)
|3| + |4| = 7
L₂ (Euclidean)
√(3² + 4²) = 5
L₃ (Minkowski)
(3³ + 4³)^(1/3) = 91^(1/3) ≈ 4.50
L∞ (Chebyshev)
max(3, 4) = 4
Observation: Manhattan gives the largest of these distances, since no diagonal "shortcuts" are allowed; in general the Lₚ value does not increase as p grows.
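The same comparison, computed with NumPy's built-in p-norms (a sketch assuming NumPy is available):

```python
import numpy as np

v = np.array([3.0, 4.0])  # difference vector between [0, 0] and [3, 4]

print(np.linalg.norm(v, 1))       # L1 (Manhattan):  7.0
print(np.linalg.norm(v, 2))       # L2 (Euclidean):  5.0
print(np.linalg.norm(v, 3))       # L3 (Minkowski): ~4.50
print(np.linalg.norm(v, np.inf))  # L∞ (Chebyshev):  4.0
```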
Practical applications
Urban planning & navigation
- City block distances
- Taxi routes on grid streets
- Logistics optimization
- Pedestrian routing
Machine Learning
- k-Nearest Neighbors
- Clustering (robust to outliers)
- Feature selection
- Sparse data analysis
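As a concrete example for k-Nearest Neighbors from the list above, a minimal sketch assuming scikit-learn; the toy data is purely illustrative:

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy data: two features, two well-separated classes (values are made up)
X = [[0, 0], [1, 1], [2, 2], [8, 8], [9, 9], [10, 10]]
y = [0, 0, 0, 1, 1, 1]

# metric="manhattan" switches k-NN from the default Euclidean to the L1 distance
knn = KNeighborsClassifier(n_neighbors=3, metric="manhattan")
knn.fit(X, y)
print(knn.predict([[1, 2], [9, 8]]))  # expected: [0 1]
```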
Statistics & optimization
- Median calculation
- Robust regression
- LASSO regularization
- Quantile regression
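To illustrate L₁ (LASSO) regularization from the list above, a minimal sketch assuming scikit-learn and synthetic data:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Only the first two features matter; the remaining three are pure noise
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=100)

lasso = Lasso(alpha=0.1)
lasso.fit(X, y)
print(lasso.coef_)  # the L1 penalty drives the irrelevant coefficients toward zero
```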
Mathematical properties
Norm properties
- Positivity: ‖x‖₁ ≥ 0, ‖x‖₁ = 0 ⟺ x = 0
- Homogeneity: ‖αx‖₁ = |α|‖x‖₁
- Triangle inequality: ‖x+y‖₁ ≤ ‖x‖₁ + ‖y‖₁
- Dual to the L∞-norm: Hölder conjugate pair (p = 1, q = ∞)
Geometric properties
- Unit ball: Octahedron (3D), diamond (2D)
- Convex: But not strictly convex
- Polyhedral: Unit ball is a polytope
- Non-differentiable: At coordinate axes
Relations to other norms
To L₂-norm:
‖x‖₂ ≤ ‖x‖₁ ≤ √n ‖x‖₂
To L∞-norm:
‖x‖∞ ≤ ‖x‖₁ ≤ n ‖x‖∞
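Both bounds are easy to check numerically, for example with a random vector:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=10)
n = len(x)

l1   = np.linalg.norm(x, 1)
l2   = np.linalg.norm(x, 2)
linf = np.linalg.norm(x, np.inf)

print(l2 <= l1 <= np.sqrt(n) * l2)  # True: ‖x‖₂ ≤ ‖x‖₁ ≤ √n ‖x‖₂
print(linf <= l1 <= n * linf)       # True: ‖x‖∞ ≤ ‖x‖₁ ≤ n ‖x‖∞
```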
Robustness to outliers
Comparison: median vs. mean
Data without outlier:
Data points: [1, 2, 3, 4, 5]
Mean (minimizes L₂ deviations): 3.0
Median (minimizes L₁ deviations): 3.0
Both are equal.
Data with outlier:
Data points: [1, 2, 3, 4, 100]
Mean (minimizes L₂ deviations): 22.0
Median (minimizes L₁ deviations): 3.0
The median remains stable!
Conclusion: The L₁-norm (Manhattan) is more robust to outliers: its minimizer, the median, is far less sensitive to extreme values than the mean.
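The comparison can be reproduced with Python's standard library:

```python
import statistics

clean   = [1, 2, 3, 4, 5]
outlier = [1, 2, 3, 4, 100]

print(statistics.mean(clean),   statistics.median(clean))    # 3, 3
print(statistics.mean(outlier), statistics.median(outlier))  # 22, 3  <- median unchanged
```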
Algorithmic aspects
Efficiency and implementation
Time complexity:
- Computation: O(n) linear
- k-NN search: O(n·m) for m points
- Median search: O(n log n) by sorting, O(n) with selection algorithms
- Advantage: No squaring or root
Numerical stability:
- Overflow-resistant: Only additions of absolute values, no squaring as in L₂
- Integer-friendly: Exact for integer inputs
- Monotonic: Partial sums only grow, no cancellation effects
- Sparse-friendly: Zero components contribute nothing, so sparsity is preserved
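A minimal O(n) implementation reflecting these points (plain Python, no dependencies; the function name is illustrative):

```python
def manhattan_int(a, b):
    """O(n) Manhattan distance: one pass, only absolute values and additions."""
    total = 0
    for ai, bi in zip(a, b):
        total += abs(ai - bi)  # partial sums only grow, no cancellation
    return total

# Exact for integers; in fixed-width languages the absence of squaring also
# keeps intermediate values far smaller than in the L2 computation.
print(manhattan_int([10**18, 0], [0, 10**18]))  # 2000000000000000000
```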