Monday, May 18, 2020

AI: Maximizing the Potentials, Minimizing the Perils

It’s easy to see both the up- and downsides of artificial intelligence.

Just a few upsides: more accurate medical diagnoses; safe, fast automated vehicles; and AI-driven instruction that continually adapts its style and pace to the student’s ongoing performance.

On the downside, luminaries such as Bill Gates and Elon Musk worry that self-teaching AI computers could get smart enough that humans won’t be able to stop them from pursuing nefarious ends.

It would seem, per Stuart Russell, author of Human Compatible, that we optimize the risk/reward tradeoff by taking two steps: 1. Don’t let the computer “know” the goal of the software. 2. Block the computer from making decisions beyond a certain magnitude: when the implications of a decision exceed a set threshold, human override is required. It’s like the car salesperson who has discretion to give a 10 percent discount but must get the boss’s approval for anything more.
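That second step, the decision-magnitude cutoff, can be sketched in a few lines of code. This is a purely illustrative toy, using the salesperson analogy; the function name, threshold, and return values are my own inventions, not any real AI-safety API.

```python
# Illustrative sketch of a "human override above a threshold" rule:
# the system may act on its own only when the stakes fall below a set
# limit; anything larger is escalated to a human, much like the
# salesperson who can offer 10 percent off but needs the boss for more.

def decide(discount_requested: float, autonomy_limit: float = 0.10):
    """Return what the system does with a requested discount.

    All names and numbers here are hypothetical, chosen to mirror
    the car-salesperson example in the text.
    """
    if discount_requested <= autonomy_limit:
        return ("approve", discount_requested)   # within the system's discretion
    return ("escalate", discount_requested)      # a human must sign off

print(decide(0.08))  # ('approve', 0.08)
print(decide(0.25))  # ('escalate', 0.25)
```

The point of the design is that the machine never gets to decide where the cutoff sits; that limit is fixed by humans outside the system.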

Another fear about AI is that an evil individual or entity could use it to nefarious ends. A few examples: releasing a murderous virus into the water supply, threatening to shut down the electric grid unless paid a zillion dollars, or developing an algorithm for manipulating people into voting for Candidate X. (Whoops, that last one already pretty much exists.) Of course, most powerful things, notably nuclear energy, could be used cataclysmically, yet most experts conclude that, rather than prohibiting them, it’s wiser to install in-computer and human oversight.

A similarly moderate stance could apply to genomic research. On the upside, it could better address such diseases as cancer, diabetes, and cardiovascular disease, and create drought-resistant, high-protein, insect-resistant crops. Gene editing might eventually be used to create a super-intelligent human. Yes, that person’s brainpower could be used for social good, but what if the gene editing also condemned him or her to a life of physical pain? Or the person could use the hyper-intelligence for personal gain even if it causes great pain to the world. Again, it would seem that regulation, both built-in and legal, might yield the risk/reward sweet spot.

What worries me is that such restrictions may push research to jurisdictions with looser rules. For example, the worldwide consensus has been that, for now at least, human gene editing be conducted only for research, not clinically. He Jiankui defied that consensus by using CRISPR to edit the embryos that became two recently born twin girls, in what he said was an effort to prevent them from contracting HIV. Russian scientist Denis Rebrikov is planning to insert a gene into an embryo that would enable deaf couples to produce hearing babies. Geneticist Bing Su has inserted a gene for brain size, which is moderately correlated with intelligence. So, in this big world of ours, amid ever-advancing gene-editing tools, and given the huge stakes (even a country or non-state actor creating an army of supergeniuses), it seems quite likely that restrictions would drive some non-complying research underground.

In toto, it’s probably wise to establish restrictions on AI: laws, professional standards, and, more difficult, limitations built into AI software itself, forcing it to switch off when the stakes are great or the implications unclear. Social norms and fear of punishment will facilitate research with a positive risk-reward ratio while restraining less advisable research. Outright bans would likely yield worse net results, as occurred when religion restricted scientific research in the Dark Ages. It seems we must accept that the perfect is the enemy of the good. Despite the likely excesses, it seems wise to bet on humankind: net, we’ll probably derive positive effects from AI. It certainly will be interesting to watch.
