Questions about Technology

Some questions I’m asking myself about technology:

  1. Can we know whether a technology will have unintended consequences?
  2. How can we know whether a technology claim is bullshit? That is, is there a way to know whether a technology’s direction is likely to pan out?
  3. Is there a way to structure technology so it delivers on a particular social outcome?
  4. When should we adapt to our tech and when should it adapt to society’s demands? Can we ever count on technology to not exceed the bounds we (society) place on it?
  5. When will the rate of change of technology slow down or speed up? Can we better predict it?
  6. Was Marx right? Is capitalism + technology headed for a clash of classes?

Old Laptops and Moore’s Law

Photo by Bruce Christianson on Unsplash

The slowdown of technology surprised me this week.

I’m writing this on an 11-inch MacBook Air made in late 2010. I dug it out of a pile of computers that had sat unused for years. I wanted to blog and email on this trip without taking my heavier, large-screen laptop, and my experiment with an iPad failed because I like a real keyboard.

I thought about upgrading to the latest MacBook Air and realized that the fastest new ones are only about three times as fast, with twice the RAM and about the same storage. While that is good, it is not the pace of change we are used to. This machine, made nine years ago, is still perfectly fine for writing a blog, browsing the web, and handling email. Plus it is smaller than today’s machines (Apple discontinued the 11-inch model), making it perfect for travel.

Think of it. Laptops used to be obsolete after three years. This one is fine after nine!
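The gap is easy to see with some back-of-envelope arithmetic. A minimal sketch, using this post’s own estimates (a nine-year-old machine, roughly 3x slower than the newest model) against the classic Moore’s-law assumption of a doubling every two years:

```python
# Back-of-envelope: how fast "should" a new laptop be, relative
# to a late-2010 one, if Moore's law-style doubling (roughly
# every two years) had held? The 3x figure is this post's own
# estimate, not a benchmark.

years = 9                      # late 2010 -> now
doubling_period = 2.0          # classic Moore's law cadence
predicted = 2 ** (years / doubling_period)
observed = 3                   # "about three times as fast"

print(f"Predicted speedup: ~{predicted:.0f}x")   # ~23x
print(f"Observed speedup:  ~{observed}x")
```

Under the old cadence, a nine-year gap should have meant a machine more than twenty times faster, not three.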

What is happening? Technology is slowing down. The chips used in laptops and data centers are hitting limits imposed by physics. Clever engineers are still figuring out ways to pack more and more transistors into microprocessors, but they are resorting to weirder and weirder techniques.

In the era when Moore’s law had a clear path into the future, the strategy for increasing the size and speed of computer chips was reducing the size of transistors. Smaller dimensions allowed fewer and fewer electrons to do the work of computer logic – switching from one state to another billions of times a second.
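The payoff of that shrinking strategy compounds geometrically. A minimal sketch, assuming an ideal linear shrink (transistor density scales with the inverse square of feature size); the feature sizes below are illustrative process nodes, not tied to any specific chip:

```python
# Why shrinking transistors paid off so well: a linear shrink
# in feature size yields a quadratic gain in how many
# transistors fit in the same chip area.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Relative transistor density from an ideal linear shrink."""
    return (old_nm / new_nm) ** 2

# Halving the feature size quadruples transistor density.
print(density_gain(90, 45))   # 4.0
```

Smaller transistors also switched faster and used less power per switch, which is why one shared strategy delivered gains on so many fronts at once.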

Now chip makers are stacking chips and rethinking the design of chips, computers, and transistors. These are all very clever ideas, and some of them will work. But they represent many different strategies for overcoming the Moore’s law slowdown.

Moore’s law of the past was not just a curve and a prediction. It was a common strategy embraced across the entire semiconductor industry. As a result, the benefits of increasing wafer size, decreasing feature size, increasing clock speed, and a gazillion other innovations were synergistic.

One little implication of the end of Moore’s law is that I am still typing away on this old laptop. A big implication I worry about is consolidation and ossification of the tech industry. I hope to dig into these ideas more and will keep using this old computer to do it when I’m on the road.

AI’s China Problem

Photo by Markus Spiske on Unsplash

AI is increasingly looking like the demented HAL 9000 rather than the Star Trek computer. We can thank China for that.

China has seized on AI as a tool of control. The latest reporting by the New York Times indicates officials have deployed facial recognition specifically designed to profile, track, and control Muslim minorities.

This is different from, but related to, the threat that China might split the internet. That too is a battle with geopolitical implications, but this one is a greater threat to democracies, including our own. China’s embrace of authoritarian technology means we might face those same technologies on our shores. Once a technology is created, it becomes easier to deploy it anywhere. While the first to copy China’s example will be other wealthy autocracies like Russia and Saudi Arabia, those products will become available to any buyer.

What happens when a Sheriff Arpaio type can buy these technologies?

The New Code War

In some ways, this is the inverse of the soft power the US and Europe have enjoyed by creating microcomputers, software, and the internet. Those technologies had ideologies baked into their creation by their libertarian-skewing creators. Commenting on a recent Chinese technology trade show, a reporter noted, “If Silicon Valley is marked by a libertarian streak, China’s vision offers something of an antithesis, one where tech is meant to reinforce and be guided by the steady hand of the state.”

We have nothing less than a new war of ideology being fought over the future of technology. Instead of communism v. democracy, this new battle seems to be straight-up authoritarianism v. liberalism. And the battlefield is code.

The Cold War was a geopolitical contest between two countries with different ideologies under threat of a hot nuclear war. This war is about the future of code. We face the threat that our own system of capitalism will undermine our liberal democracies.

Perhaps this is the “new Code War” rather than the “Cold War.” I sense this battle will define our lives for the next many decades.

The New Code War frames a choice. We can seek to build AI and software that is like the computer from Star Trek, a smart assistant to humans that fulfills the vision of liberal democracies. Or we can allow the spread of a demented technology like HAL 9000 that is determined to follow the will of its master, no matter the dissent from mere humans.

Software is special

Boeing airplanes crash, killing hundreds. Facebook is exploited to create domestic turmoil. Volkswagen cheats on emissions standards, poisoning millions. We worry about manipulation of voting machines, autonomous vehicles, drones, and any number of devices, including our garage door openers. The common thread of these modern anxieties is software.

In the history of human inventions, software is unique. Unlike any other modern invention, software is infinitely malleable. Software is a tool that allows you to make almost anything that can be expressed as rules or logic. That was first done in big calculation machines, then in desktops and smartphones. Now, as processors are deployed into everyday objects, software affects almost every aspect of modern life. Along the way, those software rules went from powering really fancy calculators – determining orbits and actuarial tables – to determining the way we communicate, shop, socialize, eat, get electricity, and buy a loaf of bread. Not since the invention of written language have we smart humans come up with such a flexible invention.
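That “anything expressible as rules” claim can be made concrete with a toy sketch: the same general-purpose engine runs wildly different behaviors depending only on the rules fed to it. Both rule sets below are hypothetical examples, not drawn from any real system:

```python
# A toy rules engine: software's malleability in miniature.
# The engine never changes; only the rules do.

def run(rules, value):
    """Apply the first matching (condition, action) rule to value."""
    for condition, action in rules:
        if condition(value):
            return action(value)
    return value

# The same engine as an actuarial-style rule (annualize a
# monthly figure for ages over 65)...
premium_rules = [(lambda age: age > 65, lambda age: age * 12)]

# ...or as a shopping rule (10% off orders over $100).
discount_rules = [(lambda total: total > 100, lambda total: total * 0.9)]

print(run(premium_rules, 70))     # 840
print(run(discount_rules, 200))   # 180.0
```

A nuclear reactor cannot be repurposed this way; a processor running software can be, endlessly.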

It has been said that “We shape our buildings; thereafter they shape us.” It is a way of looking at technology that recognizes it is a two-way street. In the case of buildings, we have a lot of flexibility in the design, but then we have to live with the consequences for decades. Compare that with software, especially modern web-based systems like Facebook or Google. These sites are literally changing every day, every hour, with many different versions operating at the same time depending on the user, the location, and the time of day. These algorithms are designed to make their human users’ lives easier and more convenient.

Compare this to the state of affairs with a nuclear reactor. There isn’t a lot of flexibility in the design of a nuclear reactor. Do it the wrong way and you have a catastrophe. Do it the right way and you have a powerful way to make energy. In order to make that technology perform for us, we develop extensive social systems: organizations responsible for planning, operations, oversight, and regulation. Nuclear reactors don’t care if the team responsible for their operation and safety gets annoyed at the repetition of its tasks. If a politician decides he wants nuclear reactors in every household, the laws of physics will not adapt to his desires.

“So what?” you might say. Why does it really matter?

The default way of thinking about technology is that it is delivered to us and we have to react to its arrival. Westinghouse designs a nuclear reactor and we live with the consequences. But the reality for software, and the vast variety of things it controls, is that it encodes the demands and wishes of those who control it. What if that were different?

The idea that software is malleable is the beginning of realizing that we, as a society, as a democracy, and as a people, have a role in shaping its future. Today we still accept that the software of Facebook, Boeing, Diebold, and Volkswagen is merely a product of those companies and that our control over it is limited. Once we internalize the plasticity of software, we will demand more of the software that increasingly controls our lives.