Move Slowly and Test Things
An interview with Ben Green, PhD Candidate in Applied Mathematics at the Harvard School of Engineering and Applied Sciences, and author of The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future
Break, innovate, disrupt, repeat. Startup cultures often celebrate fast, audacious experimentation without consideration for long-term consequences. This attitude is particularly precarious when the market to be disrupted is the very place we live. With the emergence of “smart city” platforms and products, this hubris casts cities as real-time testing labs for new technologies (think: companies introducing rideshare or dockless services into cities without obtaining the necessary approvals). As a human-centered designer and researcher, I prefer to pursue innovation through collaboration and co-creation. That’s why I was excited to read Ben Green’s new book, which synthesizes the shortcomings of taking a tech-centric approach to urban development. I had the chance to chat with Ben and dig into his refreshing perspective on how cities can put their people and problems first, rather than get distracted by the promises of untested technologies.
Q: How do you define a “smart enough” city? What would it be like for people to live in a city that’s “smart enough?”
BG: I noticed that in conversations in academia, specifically here at Harvard in my computer science department, people really like to talk about fancy algorithms and what kind of technical model you’ve developed. But in my work for the City of Boston, I saw that what matters most is implementation: identifying exactly what the problem is and how data can shed new light on it. This contrasts with the idea of the “smart city,” which is about having the fanciest apps and algorithms.
As a technologist, I’m interested in how tech interplays with other city functions, from urban planning to policing and crime. I’ve worked on or studied urban tech across a lot of city functions, and what really stood out to me were the commonalities in where tech efforts went wrong. Whether it was an app to solve civic engagement or a push to pilot self-driving cars, all of these initiatives were failing for the same reason: cities were treating these challenges as purely technical problems.
It’s easy to think of the “smart enough” city as less ambitious, which is not at all what my vision is. In actuality, if we look at cities only through the lens of tech, we’re setting our sights way too low. Improving urban life and governance by applying tech and non-tech thinking in concert is incredibly ambitious and difficult. It may lack the sexy flair of the “smart city,” but we need to be very intentional about pushing back on tech companies, and sometimes on cities themselves, to fully understand and analyze the implications of these new technologies before accepting them.
To me, a “smart enough city” shouldn’t look any different from a progressive, socially just city. The goals of a city — “smart” or not — should not be dependent on tech. And cities shouldn’t let new tech distract them from their fundamental goals.
Q: You’ve come up with the concept of “tech goggles” to explain how tech companies have fallen into the habit of seeing the city as a set of problems that technology can solve. Do you get pushback from technologists?
BG: My work is about recognizing the political implications of technology. People who try to distance themselves from that, who see tech as neutral and separate from its implementation and implications, are missing the whole picture. What tech companies are doing is inherently political, as political assumptions get baked into the design of their products.
There’s an elegance and simplicity that draws people to mathematics and modeling. People get excited about algorithms because they typically enable you to compute more efficiently, and in some cases, yes, efficiency is desirable. But in the long term, these algorithms have significant flaws and can set us on the wrong path. For technologists, it’s hard to see anything other than tech as changeable. As computer scientists and mathematicians, we’re simply not trained to see other ways to create impact on society. Technologists see the possibility for change as simply being existing conditions plus technology.
Q: You argue that a fundamental flaw in the smart city movement is that new technology introduced to cities gets conflated with innovation. How might cities be empowered to take a more critical lens to tech-centric solutions that they’re being sold?
BG: This is really central and touches on a lot of what I’m trying to do. The broader goal is to orient cities away from the logic of “Will this product or service make me a smart city?” Being a smart city is no longer an obvious win for a mayor; projects like Amazon HQ2 and Sidewalk Toronto are now getting a lot of pushback from the public. Tech companies created the dialogue that the smart city is the thing mayors want to be, built a market for it, and are selling it. But instead of chasing the “smart city” label, mayors should understand the problems that pre-date the proposed tech solution and evaluate tech by asking questions like: Is this an appropriate way to solve this problem? Can we achieve the same outcomes without tech? Will this help me improve people’s lives and make a more equitable, just city?
Q: When you were a data analyst for the City of Boston, you became more of an ethnographer than a mathematician. Tell us more about why being a data scientist alone wasn’t enough to solve complex city problems, and how we might enable more interdisciplinary approaches within city halls.
BG: I think I became a lowercase “e” ethnographer. I did it because I needed to understand what was going on at the internal systems and process level. As a computer science student, I didn’t want to just develop a new way to infer properties from data. I’ve always been interested in social impact. This has helped me focus on the impact of a system rather than falling prey to: “Oh, I can just build this really sophisticated system!”
Our team of data scientists at the City of Boston worked across city departments, and in this way we were fundamentally constrained, but in a good way: we would have no impact if we didn’t actually address the needs of the departments. So there’s a balancing act of understanding each department’s problems, listening to what they’re asking for, and then looking at what the data is showing us. We took a human-centered approach to the research and let ourselves play with the data first, without jumping to quick-fix solutions. It’s easy for people in a data science program to get assigned a problem and then build a model or app around it. The real challenge is showing operational impact: making jobs easier for frontline workers, saving the city money, and most importantly, saving lives.
Moving forward, I want to spend more time on engineering pedagogy. How can we teach technologists to design for people? How can we show them how their assumptions impose a worldview on the people who adopt tech across diverse socioeconomic and political contexts? Technologists now shape policy and practice and can no longer stand on the sidelines. We need to train technologists to see that their job is not just to build models, but to engage directly with the people they are developing for and to fully understand the role tech can and can’t play.
One thing that is ingrained in tech is “solutions.” This can be a fun concept in an abstract technical playground, but social systems are much more complex. You’re never going to “solve” mobility or congestion or democracy. Mobility problems are very different for, say, a white family in the suburbs than for a single mother in a city. Building “solutions” requires abstracting away a lot of what happens in practice, and because of that, we risk solving a problem for some while exacerbating it for others.
Q: In his critique of the smart city, architect Rem Koolhaas wonders why smart cities “only offer improvement” and not “transgression.” It seems like the smart city is one driven by products and solutions, rather than a social ideal that people can aspire towards. What are some ways that the smart city could be more relatable, more human, and therefore, more compelling?
BG: I think that really captures a lot of this. When we’re in the logic of “existing system plus tech,” all we can do is optimize the existing system. We need to strive for things that are difficult, transgressive, and contested — that’s how we make progress. We shouldn’t strive for a society where there’s no conflict.
My early thinking for the book came from historical analogues: reading the works of rational city planners and modernists like Le Corbusier, Ebenezer Howard, and Robert Moses. Their logic and writing are incredibly similar to how we hear the smart city pitched today. There was this universal desire for simplicity, the idea that we can have a seamless, simple city. We’re seeing a continuation of that thinking now as technologists apply methods from science and engineering to solve social problems.
Q: Are there any “deleted scenes” from the book that are dear to you that you’d like to share?
BG: There were two major things I cut out. One was the Chicago Department of Public Health’s initiative to apply machine learning to prevent lead poisoning in newborn babies. They had a statistical model to help predict likely cases of lead poisoning and prioritize inspections, but the Department of Public Health simply did not have enough resources to run enough inspections, and the legal system was not strong enough to hold landlords accountable. In this case, the technical system was actually good; the limitations were institutional and political.
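As a minimal sketch of what a risk-prioritization model of this kind might look like, the example below trains a classifier on past inspection results and ranks homes by predicted risk. The feature names, input file, and model choice are illustrative assumptions, not details of Chicago’s actual system.

```python
# Hypothetical sketch of a lead-inspection risk model. The feature names,
# the input file, and the model choice are illustrative assumptions, not
# details from Chicago's actual system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Each row is a home; "hazard_found" marks whether a past inspection
# found a lead hazard there.
homes = pd.read_csv("inspection_history.csv")
features = ["building_age", "prior_violations", "assessed_value", "vacancy_rate"]

X_train, X_test, y_train, y_test = train_test_split(
    homes[features], homes["hazard_found"], test_size=0.2, random_state=0
)

# Fit the model and check how well it separates hazards on held-out homes.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank homes by predicted risk so inspectors can work down the list until
# the inspection budget runs out.
homes["risk"] = model.predict_proba(homes[features])[:, 1]
print(homes.sort_values("risk", ascending=False)[["risk"] + features].head(10))
```

Even if the model ranks well, everything after the ranked list (funding the inspections, compelling landlords to remediate) sits outside the code, which is exactly where the Chicago effort was constrained.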
I also had a section on employment, specifically on rideshare drivers. I interviewed people starting their own worker cooperatives as competitors to Uber. The cooperatives were almost entirely composed of immigrants who had been taxi drivers and now saw the possibilities for how tech could empower them. They formed a worker-owned ridesharing platform that wasn’t exploitative. The architecture and economic models of these systems are often seen as inevitable, but a system like Uber doesn’t have to be exploitative, and WiFi kiosks like LinkNYC don’t have to be surveillance tools. The architecture of technologies is always a choice.
Q: What was the most delightful discovery you made while researching and writing your book?
BG: One thing that really surprised me was my research on Columbus, Ohio, and the U.S. Department of Transportation’s Smart City Challenge. I’m obviously quite skeptical of smart city stuff, and I was very much expecting to see the sorts of tech hype I’ve seen in other places. Right off the bat, it became clear that something was different. They were very interested in social justice outcomes, not tech for tech’s sake. I saw a really thoughtful plan, grounded in local context, from leaders who understood their city and saw what role tech could play.
One other thing that surprised and delighted me was just how ready people are for a new approach. Every city official I talked to (though there was obviously some selection bias) knew that smart cities were a dangerous distraction and was trying to find an alternative path forward. Five years ago, the story was that cities didn’t understand technology, so they needed the tech companies to tell them what to do. Now it’s the opposite: cities have made great strides in understanding what technology will and won’t be useful for, but the tech companies are still selling the same types of systems that aren’t tailored to real people and real problems. I hope that my book can help empower those who are already striving for alternatives to smart cities, and will push many more people to do the same.