A book to read while you’re stalled in traffic

What’s the cause of traffic congestion? Many people have a quick answer.

Traffic congestion? Obviously, there are too many cars.

Traffic congestion? That just means there isn’t enough road space.

Traffic congestion? It’s all those cyclists and streetcars getting in the way.

With 45 years’ experience as a driver, 35 years as an everyday cyclist, and seven years working in road construction, I’d like to think I’ve learned something about coping with – not to mention causing – congestion. But I’ve never had a day of formal education in traffic engineering or town planning.

Road Traffic Congestion, published by Springer in April 2015, 401 pages, $99 ebook, $129 hardcover


So I opened Road Traffic Congestion: A Concise Guide with the hopes that it would offer methodical, realistic ways to look at both the causes of traffic congestion and its relief.

With its 400 pages of conciseness, this manual discusses the relationship between transportation technologies, the causes, characteristics and consequences of congestion, and the pros and cons of a wide range of relief strategies.

So is the problem too many cars, or not enough road? The experts open the book with a diplomatic dodge of this loaded question: “Congestion in transportation facilities – walkways, stairways, roads, busways, railways, etc. – happens when demand for their use exceeds their capacity.” (The mention of walkways and stairways notwithstanding, there is little attention given to foot-powered transportation, and with some notable exceptions in the closing chapters, the traffic discussed is car and truck traffic.)

Still in the opening chapters, Falcocchio and Levinson hint at another direction for investigation: “When growth in economic activities significantly outpaces the growth in transportation infrastructure investments, cities experience congestion to levels that make mobility difficult.” Would they make an evidence-backed argument, I wondered, that all the post-World War II investments in freeways, suburban arterials and parking lots have been disproportionately small?

But the book provides no real economic analysis, either of the comparative economics of different modes of transportation or of the relationship of transportation infrastructure to the economy as a whole.

What the authors do provide is a systematic cataloguing of the ways in which traffic gets backed up on our roads, with examples from across the continent. To an outsider, their discussion illustrates both the strengths and the limitations of current traffic engineering practice.

Whether discussing backups associated with closely spaced traffic lights on a main arterial, or backups around a non-standard intersection with five spokes, the focus remains on finding ways to reduce the delay for cars and trucks. This is not to suggest that the authors are unaware of safety issues for cyclists and pedestrians; they are careful to note possible hazards for non-motorists and the need to minimize pedestrian/automobile conflict points.

But in most of the data they marshal from cities across North America, the factors which are measured and worked into formulas are data about vehicles: cars per lane per mile, total vehicle throughput, vehicle minutes of delay, average vehicle speed, etc.

Just as significantly, the methods and formulas are applied to traffic moving on a single given street, as opposed to the sum of the traffic moving along a street and the traffic crossing it. For example, there are formulas for calculating how the addition of a signal light will impact traffic throughput on an arterial road – but how will this impact the travelers trying to cross that street? Clearly these are complex relationships, but if we focus only on the rate of traffic flow on a given street, how can we know whether our traffic-enhancement strategies on that street are helpful or harmful to the circulation in the whole neighborhood or district?

It’s clear that lots of professional effort has gone into measuring and defining levels of congestion. So I was surprised to see the subjectivity at the heart of so much of the discussion. At what level of crowding does congestion begin? A National Cooperative Highway Research Program Report pegs congestion to “the travel time or delay in excess of that incurred under light or free-flow travel conditions.” Likewise, a 2011 Urban Mobility Report from the Texas Transportation Institute concludes that Chicago and Washington, DC drivers spend 70 extra hours each year in congested traffic, using as their baseline the time these same trips would take in “free-flow” conditions.

Falcocchio and Levinson, however, write that

in a large city it is not realistic to travel at free flow speed (or at the posted speed limit) in the peak hour. It is not logical, therefore to compare actual peak hour travel times to free-flow peak hour travel times when free-flow in the peak hour is a practical impossibility in a large city. (emphasis theirs)

While this strikes me intuitively as correct, I hoped they would offer a compelling argument to back up their position. That argument is missing from the book. A clue emerges, however, in their discussion of the flow and lack of flow on inner-city freeways.

By design, a freeway is one of the least complex traffic systems. Many variables that are present on city streets are absent on freeways: there are no traffic lights, no parking spaces, and no direct access from driveways; traffic is divided into one-way streams; and only vehicles moving at roughly the same speed are allowed. It’s true that (for the time being) all drivers are human, and thus their reaction times vary and chaos can happen. But there are neat graphs for the typical relationships between density of traffic (in vehicles per lane per mile), vehicle speeds, and traffic throughput (in vehicles per lane per hour).

Traffic throughput on a freeway drops rapidly after density increases and speeds drop below the “critical speed” of 53 mph. This typically happens when density has increased to about 45 cars per lane per mile, with just under 100 feet between cars. Road Traffic Congestion, page 153.


These graphs show that up to a point, vehicle speeds slow modestly as traffic gets more dense, while total throughput still increases. When speeds drop below that point – the “critical speed” – the total throughput drops as well. On a typical freeway with design speed of 70 mph, this critical speed is 53 mph, and traffic tends to drop below this speed when density reaches 45 cars per lane per mile.

As the authors note, at this density there is an average of 97 ft between vehicles in each lane. So in pure spatial terms, the freeway is still closer to empty than to full. Yet it has already reached peak efficiency and any additional traffic will cause a drop in throughput – a drop that gets steeper with each increment of additional density.
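The figures hang together arithmetically: throughput is simply density times speed, and the spacing between vehicles follows from the density. A quick sketch of that arithmetic (the 20-foot average vehicle length used to derive the gap is my assumption, not a figure from the book):

```python
# Back-of-envelope check of the freeway figures cited above, using the
# fundamental traffic-flow identity: throughput = density * speed.
CRITICAL_DENSITY = 45   # vehicles per lane per mile (from the book)
CRITICAL_SPEED = 53     # mph, the "critical speed" (from the book)
VEHICLE_LENGTH_FT = 20  # assumed average vehicle length (my assumption)
FEET_PER_MILE = 5280

# Throughput at the critical point, in vehicles per lane per hour
throughput = CRITICAL_DENSITY * CRITICAL_SPEED

# Center-to-center spacing at 45 vehicles per mile, then subtract an
# assumed vehicle length to get the open gap between vehicles
spacing = FEET_PER_MILE / CRITICAL_DENSITY
gap = spacing - VEHICLE_LENGTH_FT

print(f"Peak throughput: {throughput} vehicles/lane/hour")
print(f"Gap between vehicles: {gap:.0f} ft")
```

With those numbers, peak throughput works out to 2,385 vehicles per lane per hour, and the gap to roughly 97 feet – matching the authors’ figure, and making concrete just how much empty pavement a freeway needs even at maximum efficiency.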

In other words, a freeway needs to remain mostly empty to stay anywhere close to free-flow, if by free-flow we mean moving at close to its design speed. Not only must most of the space in each lane be unoccupied, but there are extra lanes required for emergency access; interchanges require lots of additional space; and because the freeway provides no direct access to the main city grid or to driveways, even more space is needed for service roads.

This, perhaps, is why it is practically impossible to achieve free-flow traffic at peak hour in a large American city: there simply isn’t the space to allow each and every commuter to drive simultaneously on almost-empty roadways.
And so we return to the question posed at the beginning: is congestion caused by too many cars, or not enough road? Cars will only move freely, Falcocchio and Levinson make clear, if there are fewer of them:

where added capacity is provided, its lasting effect on congestion relief (especially in metropolitan areas exceeding 2 million people) can only be realized by combining it with strategies that reduce the need to travel by car …

And so the last third of the book outlines strategies for reducing vehicle traffic volume: road tolls, market-priced parking, high-occupancy vehicle lanes, flex-time work schedules, provision of park-and-ride facilities, public transit service improvements, and suburban land-use planning geared to maintaining a grid rather than devolving into winding crescents and cul-de-sacs. The brief discussions of walking and cycling are insightful, even though they arise here in the context of improving motor vehicle flow.

Historical spread of congestion out from Central Business Districts. Road Traffic Congestion, page 20.


For someone living at the far edge of greater Toronto, Road Traffic Congestion reads as a warning sign. The book illustrates how traffic congestion has spread inexorably beyond center cities to the inner suburbs, suburbs and exurbs. In the strip malls, big-box shopping centers, and curly-maze residential monocultures that are marching out from the city, we see tomorrow’s congestion in the making.

As Falcocchio and Levinson explain, fixes that take a few months or a few years, such as adding turn lanes or building a new freeway, often just push the congestion farther down the road, while land-use policies that favour non-car travel can take a generation to be fully effective:

Although … “smart growth” land use/transportation strategies are effective (in the near term) at the neighborhood scale, significant reductions in regional VMT [vehicle miles traveled] impacts resulting from a change in land use patterns, however, takes a long time …