Computer Programming Is a Dying Art

Writing code is a terrible way for humans to instruct computers. Lucky for us, new technology is about to render programming languages about as useful as Latin.

The headlong global frenzy to teach programming in schools is coming about 20 years too late. Boston, New York, Estonia, New Zealand and a whole lot of other places are going crazy for coding courses, egged on by Code.org, which is backed by Mark Zuckerberg and Bill Gates. It’s sacrilegious in tech circles to say so and might get me disinvited from parties with the cast of Silicon Valley, but learning a programming language could turn out to be fruitless for most kids.

We’re approaching an interesting transition: Computers are about to get more brainlike and will understand us on our terms, not theirs. The very nature of programming will shift toward something closer to instructing a new hire how to do his or her job, not scratching out lines of C++ or Java.

Using a made-up language to talk to computers goes back to the 1950s, when IBM scientist John Backus created FORTRAN. Since computers then had less processing power than an earthworm, it was much easier for humans to learn ways to give instructions to computers than it was to get computers to comprehend humans. Over the next six decades, programming languages got increasingly sophisticated. But computers are still like Parisian waiters who refuse to listen if you don’t speak French: They take direction only in their own language.

Finally, it looks as if that will change. A couple of developments illustrate how. One comes out of the Defense Advanced Research Projects Agency (DARPA), the military’s science lab. Later this year, DARPA is going to launch a program called MUSE (Mining and Understanding Software Enclaves). “What we’re trying to do is a paradigm shift in the way we think about software,” DARPA’s Suresh Jagannathan tells Newsweek.

The first step is for MUSE to suck up all of the world’s open-source software—hundreds of billions of lines of code—and organize it in a giant database. The thinking is that of the 20 billion lines of code written each year, most of it repeats something that lots of programmers all over the globe have already done. MUSE will assemble a massive collection of chunks of code that can perform almost any task anybody could ever think of and tag all the code so it can automatically be found and assembled.

One outcome of MUSE, Jagannathan explained, is that someone who knows nothing about programming languages will be able to program a computer. Just tell MUSE what you want a computer to do. If MUSE can understand the intent, it can find the code to carry out that task. Of course, someone has to write the raw code, just as some farmer has to grow the wheat that winds up in the box of rigatoni at Safeway. But, as happened with farmers, far fewer coders will be needed as existing code gets repurposed. Theoretically, just one person on the planet will have to write the raw code that makes a computer perform a certain task.

In the end, far more people will be able to program without knowing code. They’ll just need good higher-level design thinking so they can clearly, logically explain the computer’s task.

MUSE will take years to spring to life, just as self-driving cars are only now getting perfected, a decade after DARPA issued its grand challenge to develop them. But MUSE sets in motion the emergence of non-coding programming. It should be working by the time today’s grade-schoolers get into the workforce, if not sooner.

IBM Research chief John Kelly likes to say we’re at the dawn of an era of brain-inspired cognitive computers—and the beginning of the end of the 60-year reign of programmable computers that required us to tell them what to do step by step. The next generation of computers will learn from their interactions with data and people. In another decade or so, we won’t program computers—we’ll teach them.

IBM’s Jeopardy-winning Watson will someday be seen as an early, rough version of cognitive machines. Numenta, run by PalmPilot inventor Jeff Hawkins, developed technology called Grok that learns by recognizing patterns over time, the way brains do. It’s being used, for instance, by Amazon to spot unusual activity on its computers. President Barack Obama’s BRAIN Initiative will contribute to the development of brain-inspired computers. As with MUSE, cognitive technology is years from getting serious, but it’s already moving forward.
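The flavor of "spotting unusual activity" can be shown with a much simpler statistical sketch — a generic rolling z-score check, which is emphatically not Numenta's algorithm (Grok is built on hierarchical temporal memory, a brain-inspired model) but captures the basic idea of flagging readings that break a learned pattern:

```python
# Generic anomaly-detection sketch (NOT Numenta's HTM algorithm):
# flag a reading as unusual if it sits far outside the recent pattern.
from statistics import mean, stdev

def is_anomalous(history: list[float], reading: float, threshold: float = 3.0) -> bool:
    """Flag `reading` if it lies more than `threshold` standard
    deviations from the mean of the recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > threshold

# Hypothetical server CPU-load readings hovering around 0.30.
cpu_load = [0.31, 0.29, 0.33, 0.30, 0.32, 0.28, 0.31, 0.30]
print(is_anomalous(cpu_load, 0.31))  # False: a typical reading
print(is_anomalous(cpu_load, 0.95))  # True: an unusual spike
```

The difference the article is pointing at: nobody programmed a rule saying "0.95 is bad." The system learned what normal looks like from the data and reacts when the data stops looking normal.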

In light of all this, what should we teach our kids so they’re ready for this new era? Probably not coding, per se. “There is definitely a need for people to learn kind of a computer science way of thinking about problems, but not necessarily the language du jour,” says Erik Brynjolfsson, a professor at the MIT Sloan School of Management and author of best-selling books about computers that are going to steal our jobs. Says Irving Wladawsky-Berger, formerly of IBM and now at New York University, “We should definitely teach design. This is not coding, or even programming. It requires the ability to think about the problem, organize the approach, know how to use design tools.”

Of course, the world desperately needs coders now and will for the next decade. Zuckerberg and Gates support Code.org in part because their companies can’t find enough people who can write C++ or Java or Python or Ruby or whatever language is in vogue. And learning code promotes a type of logical thinking that will always be useful in our age of smart machines.

But in 2030, when today’s 10-year-olds are in the job market, they’ll need to be creative, problem-solving design thinkers who can teach a machine how to do things. Most of them will find that coding skills are about as valuable as cursive handwriting.