Artificial Intelligence And The Battle For Privacy Rights

14 July 2016

Artificial intelligence, long in the realm of science fiction and dystopian visions of the future, pushes further into our reality every day. Algorithms that Google, Facebook, and many other tech companies run are moving along their asymptotic paths toward approximating the neural firings and pathways that make the human brain so powerful. In doing so, though, they use astounding amounts of data that raise concerns from governments and private citizens about the extent to which privacy rights are compromised.

As technology advances exponentially, the world is starting to grapple with the logistical and ethical questions AI has begun to raise.

The Growth of AI

AI as a technological marvel serves both as a response and as a way to capitalize on the explosion of big data in the last twenty years. Companies and individuals functioning online create information on levels that would have been inconceivable a generation ago. In some ways, you can envision AI as a defensive strategy. There is simply too much data for people to process, so inventing technological ways to deal with it is similar to creating irrigation systems to adjust to a drought; it finds a way to overcome limitations and handle what needs to be done.

The concerns about AI, though, arise from some companies' use of that data for more offensive corporate strategies. Facebook, for example, uses the information it collects to tailor user experiences to people's interests, locations, work, and many other factors. Similarly, Google uses search data and other information to deliver relevant results to its users with surprising accuracy. In both cases, technology hurtles closer each day to true AI, and it does so to create market advantages based on what it collects from and about people.

EU Regulation and US Privacy Concerns

Many people use these companies’ products in part because of the ease and convenience they provide. On the other hand, growing numbers of people are becoming concerned that allowing virtually unfettered collection and use of customer data destroys whatever privacy rights people may have. Revelations in recent years about government collection of data have made many wary of this infringement. The amount of information that is available, and used for government surveillance or commercial interests, is indeed staggering.

In April 2016, the European Union adopted new regulations for data storage, use, and protection. These are not set to take effect until 2018, and in the interim we can expect significant battles over whether and how they should apply. For instance, the regulations establish a right for citizens to request that links to their information be removed from search results, with the EU empowered to levy large fines on companies that do not honor those requests.

Perhaps more troubling for companies that rely on AI as a market differentiator, the regulations restrict automated decision-making that “significantly affects” EU citizens. They define this to include technology that evaluates work performance, personal preferences, location, movements, personal behavior, economic situation, or health—a broad list that may capture much of what Google and Facebook do. Further, they give citizens a right to demand an explanation of how and why the technology arrives at the decisions it does.

All of this seems to encompass activity that falls directly into the wheelhouse of what AI does for these companies, and for many other developing tech companies. When companies target ads to individuals, they do so by processing data through AI technology—by analyzing personal information and using it to help advertising succeed. Their use of this information, unless the regulations diminish greatly in the next two years, may become illegal in the EU.

The United States, too, is grappling with the potential privacy concerns AI and big data represent. The Edward Snowden revelations about the NSA spying program shined a bright spotlight on government collection of data, but emerging trends in private data collection suggest tensions as well. In particular, while Americans tend to appreciate the convenience these strategies provide, they get squeamish at the idea that so much information is collected and used without their being able to see it or understand how and why it occurs.

The U.S. has not yet moved toward the kind of regulatory overhaul proposed in the EU. Nonetheless, it is examining the importance of this issue. In May 2016, the Obama Administration put out a report, “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights,” that outlines the potential and the pitfalls of AI-like programs already in operation as well as under development. The report talks about the promise of this technology to fight discrimination and better people’s lives in tangible ways. On the other hand, it also cautions about faulty algorithmic logic and biased information collection and use. The inputs at issue, and the purposes behind the development of the technology, can have powerful impacts on the effects produced.

Future Considerations

The largest companies developing and using AI technologies must address concerns raised everywhere, and they are undoubtedly formulating ways to distinguish what they do enough to avoid having to cease their most profitable practices. With two years to work with, both to fight the EU regulations before they take effect and to create technological work-arounds, it is likely Facebook and Google will be just fine. Regulation has always run behind technology, in almost every arena. AI developers will create plans and paths to evade catastrophic consequences from these actions.

That said, the sweeping approach the EU has taken will certainly reverberate through the tech industry. AI is no longer the province of only a few huge players; Silicon Valley startups and multinational corporations alike use this technology at different levels to compete in the market. Movements to control big data will necessarily impact companies that use it, whether for ad targeting, general consumer research, or thousands of other applications.

Going forward, companies will need to find ways to describe and explain better how their AI technology works, with an increasingly skeptical audience for those explanations. Companies that find ways to marry effective technology with openness toward regulators and consumers alike will emerge as the new leaders in the years to come.