Manhattan Distance (L₁-Norm)

Calculator to compute the taxicab distance (L₁) with formulas and examples

Manhattan Distance Calculator

What is calculated?

The Manhattan distance (also called taxicab distance or L₁-norm) is the sum of absolute differences of all components. It corresponds to distance along grid-aligned paths.

Input points / vectors

Coordinates separated by spaces

Same number of coordinates as X

Result
Manhattan distance (L₁):
Sum of absolute differences (grid-aligned distance)

Manhattan Info

Properties

Manhattan distance:

  • Also called L₁-norm or taxicab distance
  • Sum of absolute differences
  • Follows right-angled grid paths
  • Robust to outliers

Intuition: The distance a taxi in Manhattan must drive when it can only follow the rectangular street grid.

Special cases
2D city blocks:
|Δx| + |Δy| = number of blocks
Diamond shape:
Unit ball is a diamond/rhombus
Median minimization:
Minimizes sum of absolute deviations

Formulas for Manhattan distance

Basic formula (L₁-norm)
\[d_1(x,y) = \sum_{i=1}^n |x_i - y_i|\] Standard Manhattan distance
Vector norm
\[d_1(x,y) = \|x-y\|_1\] L₁-norm of the difference
2D formula (taxicab)
\[d = |x_2-x_1| + |y_2-y_1|\] Classic taxicab distance
3D formula (space)
\[d = |x_2-x_1| + |y_2-y_1| + |z_2-z_1|\] Extended Manhattan distance
Weighted form
\[d_w(x,y) = \sum_{i=1}^n w_i |x_i - y_i|\] With weights wᵢ
Median relation
\[\text{arg min}_c \sum_{i=1}^n |x_i - c| = \text{median}(x)\] Median minimizes L₁-norm
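
The formulas above map directly onto a few lines of code. A minimal sketch in Python (the function names are illustrative, not from any particular library):

```python
def manhattan(x, y):
    """L1 (Manhattan) distance: sum of absolute coordinate differences."""
    if len(x) != len(y):
        raise ValueError("points must have the same dimension")
    return sum(abs(a - b) for a, b in zip(x, y))

def weighted_manhattan(x, y, w):
    """Weighted L1 distance with per-coordinate weights w_i >= 0."""
    return sum(wi * abs(a - b) for wi, a, b in zip(w, x, y))

print(manhattan([3, 4, 5], [2, 3, 6]))        # 3
print(weighted_manhattan([0, 0], [3, 4], [1, 2]))  # 1*3 + 2*4 = 11
```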

Detailed calculation example

Example: Manhattan([3,4,5], [2,3,6])

Given:

  • Point A = [3, 4, 5]
  • Point B = [2, 3, 6]

Step 1 - Absolute differences:

  • |3 - 2| = 1
  • |4 - 3| = 1
  • |5 - 6| = 1

Step 2 - Sum:

\[d_1 = 1 + 1 + 1 = 3\]

Interpretation: The Manhattan distance corresponds to the number of "steps" along coordinate axes.
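
The two steps of the example can be reproduced directly in a small script:

```python
A = [3, 4, 5]
B = [2, 3, 6]

# Step 1: absolute component-wise differences
diffs = [abs(a - b) for a, b in zip(A, B)]  # [1, 1, 1]

# Step 2: sum them up
d1 = sum(diffs)
print(diffs, d1)  # [1, 1, 1] 3
```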

Taxicab visualization

Example: From (1,1) to (4,3) in Manhattan

Grid visualization (figure): start S at (1,1), bottom-left; end E at (4,3), top-right, on the street grid.

One possible path: 3 right + 2 up = 5 steps

Calculation:

\[d_1 = |4-1| + |3-1| = 3 + 2 = 5\]

All possible paths:

  • 3× right, then 2× up
  • 2× up, then 3× right
  • Any combination of 3R + 2U
  • All have length 5!
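
Every such path is just an ordering of 3 right-steps and 2 up-steps, so the number of distinct shortest paths is a binomial coefficient. A quick check in Python:

```python
from math import comb

# Steps needed from (1,1) to (4,3)
dx, dy = abs(4 - 1), abs(3 - 1)  # 3 right, 2 up

# Choose which 3 of the 5 steps are right-steps
n_paths = comb(dx + dy, dx)
print(n_paths)  # 10
```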

Comparison of Lₚ norms

For points [0,0] and [3,4]
L₁ (Manhattan)
7.000

|3| + |4| = 7

L₂ (Euclidean)
5.000

√(3² + 4²) = 5

L₃ (Minkowski)
4.498

(3³ + 4³)^(1/3)

L∞ (Chebyshev)
4.000

max(3, 4) = 4

Observation: Manhattan (p = 1) gives the largest of these distances, since no diagonal "shortcuts" are allowed; for fixed points, the Lₚ distance decreases as p increases.
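
The table can be reproduced with a small generic Lₚ function (a sketch; `lp_distance` is an illustrative name):

```python
def lp_distance(x, y, p):
    """Minkowski (Lp) distance; p = float('inf') gives Chebyshev."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == float('inf'):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1 / p)

for p in (1, 2, 3, float('inf')):
    print(p, round(lp_distance([0, 0], [3, 4], p), 3))
```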

Practical applications

Urban planning & navigation
  • City block distances
  • Taxi routes on grid streets
  • Logistics optimization
  • Pedestrian routing
Machine Learning
  • k-Nearest Neighbors
  • Clustering (robust to outliers)
  • Feature selection
  • Sparse data analysis
Statistics & optimization
  • Median calculation
  • Robust regression
  • LASSO regularization
  • Quantile regression
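
As an illustration of the k-NN use case, here is a toy nearest-neighbour lookup under the L₁ metric (the data and function names are made up for the example):

```python
def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def knn(query, points, k=1):
    """Return the k points closest to query under the L1 metric."""
    return sorted(points, key=lambda p: manhattan(query, p))[:k]

data = [(0, 0), (3, 4), (1, 1), (10, 10)]
print(knn((2, 2), data, k=2))  # [(1, 1), (3, 4)]
```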

Mathematical properties

Norm properties
  • Positivity: ‖x‖₁ ≥ 0, ‖x‖₁ = 0 ⟺ x = 0
  • Homogeneity: ‖αx‖₁ = |α|‖x‖₁
  • Triangle inequality: ‖x+y‖₁ ≤ ‖x‖₁ + ‖y‖₁
  • Dual to L∞-norm: Hölder conjugation
Geometric properties
  • Unit ball: Octahedron (3D), diamond (2D)
  • Convex: But not strictly convex
  • Polyhedral: Unit ball is a polytope
  • Non-differentiable: At coordinate axes
Relations to other norms

To L₂-norm:
‖x‖₂ ≤ ‖x‖₁ ≤ √n ‖x‖₂

To L∞-norm:
‖x‖∞ ≤ ‖x‖₁ ≤ n ‖x‖∞
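
Both inequality chains are easy to spot-check numerically (a sketch using a random vector):

```python
import math
import random

random.seed(0)
n = 5
x = [random.uniform(-10, 10) for _ in range(n)]

l1 = sum(abs(v) for v in x)
l2 = math.sqrt(sum(v * v for v in x))
linf = max(abs(v) for v in x)

# ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2
assert l2 <= l1 <= math.sqrt(n) * l2
# ||x||_inf <= ||x||_1 <= n * ||x||_inf
assert linf <= l1 <= n * linf
print("inequalities hold")
```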

Robustness to outliers

Comparison: median vs. mean

Data without outlier:

Data points: [1, 2, 3, 4, 5]
Mean (L₂): 3.0
Median (L₁): 3.0
Both equal

Data with outlier:

Data points: [1, 2, 3, 4, 100]
Mean (L₂): 22.0
Median (L₁): 3.0
Median remains stable!

Conclusion: The L₁-norm (Manhattan) is more robust to outliers: its minimizer is the median, which is far less sensitive to extreme values than the mean (the L₂ minimizer).
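
The comparison above can be reproduced with Python's standard `statistics` module:

```python
from statistics import mean, median

clean = [1, 2, 3, 4, 5]
dirty = [1, 2, 3, 4, 100]  # same data with one outlier

print(mean(clean), median(clean))  # both 3
print(mean(dirty), median(dirty))  # mean jumps to 22, median stays 3
```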

Algorithmic aspects

Efficiency and implementation

Time complexity:

  • Computation: O(n) linear
  • k-NN search: O(n·m) for m points
  • Median search: O(n log n) via sorting (O(n) expected with quickselect)
  • Advantage: No squaring or root

Numerical stability:

  • Overflow-resistant: additions only, no squaring, so intermediates stay small
  • Integer-friendly: exact for integer coordinates
  • Monotonic: all terms are non-negative, so partial sums never oscillate
  • Sparse-friendly: zero components contribute nothing
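
As a sketch of the sparse-friendly point: for vectors stored as index→value dictionaries, only coordinates where at least one vector is nonzero need to be touched (illustrative code, not a specific library API):

```python
def sparse_manhattan(x, y):
    """L1 distance between sparse vectors given as {index: value} dicts.

    Only indices present in at least one of the two dicts contribute;
    all other coordinates are zero in both and add nothing.
    """
    keys = set(x) | set(y)
    return sum(abs(x.get(k, 0) - y.get(k, 0)) for k in keys)

a = {0: 3, 7: 1}
b = {0: 2, 5: 4}
print(sparse_manhattan(a, b))  # |3-2| + |1-0| + |0-4| = 6
```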