The consultation response discussed here, along with this blog post, was written in a personal capacity and reflects solely the views and opinions of the author, not those of AI for Good as an organisation.
Back in March, I gave readers a summary of the key takeaways from the European Commission’s White Paper on AI, along with my concern that, given the prevailing view of Europe as technologically lagging behind, ethical considerations for AI systems would be deemed less important than rapid infrastructural development.
After further research, the focus of my response shifted slightly: rather than advocating for the importance of upholding the ethical guidelines already provided, I argue that the proposed and existing consumer protections do not go far enough. My earlier concern that industry would be prioritised over consumer welfare, based on existing arguments to abolish net neutrality in the European tech space with respect to 5G technology, still stands. However, I now argue that the White Paper fails to acknowledge AI’s current and emerging role in the data economy holistically; if it did, the proposed regulations could lay the groundwork for moderating future human-machine interactions.
As an organisation, we believe that AI tools should be held to the highest ethical standards, and it is this belief that informed my response and my aim of furthering the technological discussion surrounding AI. The purpose of this post is therefore to highlight key arguments from my response and to contextualise their importance in building ethical and scalable AI.
What the White Paper provides is an attempt at creating the semblance of regulation without fundamentally engaging with the corporate structures that influence both AI creation and its integration. The focus on investing in unified AI development, embodied by the ‘excellence’ category, is important but only half the battle, addressing merely the immediate concerns of the technology as it relates to boosting economic growth. The other half, contained within the ‘trust’ category, therefore falls short of what it could set out to achieve: a wider, comprehensive framework delineating the rules of human-AI encounters, one that could one day serve as a reference point for adapting legislation as advances in robotics and the Internet of Things continue. Arguably, it was a similar fear of stifling technological innovation that created a policy vacuum and, by failing to properly regulate the American tech sector, led to the development of the data economy now continuously under scrutiny.
The parallels then become clear as new concerns emerge from the realm of chatbots and conversational agents, including voice assistants such as Siri and Alexa. While human-learned bias embedded in AI, something already found in more basic forms of algorithms, has become a mainstream topic, far less attention has been paid to how human-AI interactions will be governed, to what extent data gleaned from these interactions can be monetised, and what types of information in these conversations should be protected from being accessed by advertisers. Arguments are already being made that, as the data economy grows, two types of consumers will emerge: those who protect their data and those who give it up freely. It is the former who will need to be targeted more acutely, via AI chatbots and the like, to encourage the self-disclosure of otherwise private information.
Thus, my preliminary research led my consultation response to focus on dark patterns in AI and the importance of engaging directly, through policy, with this aspect of user experience. The term ‘dark patterns’ was coined by UX specialist Harry Brignull in 2010 to refer to tricks used in websites and apps to make users do things they did not mean to do, such as purchase extra items or sign up for misleading services. Since then, there has been a lengthy battle between policymakers and digital platforms over point-of-sale tactics employed online to push customers into spending more money or deceive them into signing up for recurring services. While tactics such as the ‘sneak-into-basket’ pattern and the ‘hidden costs’ pattern were outlawed in part by the EU’s 2014 Consumer Rights Directive, existing regulations have not been adapted to protect against deceptive data gathering practices, and so consumers are subconsciously nudged towards giving up personal data by manipulative UX designs.
Even though the GDPR was hailed for its consumer data protections and its enforceability against non-EU-based technology companies, the Norwegian Consumer Council, a body funded by the Norwegian government, has since found that many of the largest tech players, including Google, Facebook, and Microsoft, still implement deceptive data gathering practices that can be categorised as dark patterns. The four categories of manipulative and coercive interfaces identified by the council were:
- Framing
- Ease
- Rewards and punishment
- Forced action and timing
Some of these may seem self-explanatory, but I will provide a quick overview. The framing dark pattern relates to the wording used to frame a question or a statement. It often emphasises the positive aspects of a decision (e.g. ‘you can access more features on our site if you give us more personal information’) while glossing over the negative ones (e.g. ‘we now know more about you and may sell your information to third-party services and advertisers’). The ease dark pattern plays on consumers’ desire to quickly access the webpage or information they seek, making it easier to click ‘accept all’ for cookie settings or other forms of information disclosure than to individually select one’s preferences before accessing the site.
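To make the asymmetry behind the ease pattern concrete, here is a minimal, purely illustrative sketch in TypeScript. The category names and step counts are hypothetical, not drawn from any real consent banner; the point is simply that the data-maximising choice is engineered to cost the user far fewer interactions than the privacy-preserving one.

```typescript
// Hypothetical sketch of the "ease" dark pattern: the data-maximising choice
// takes one interaction, while the privacy-preserving choice takes several.
// All names and numbers are illustrative, not taken from any real site.

type ConsentCategory = "necessary" | "analytics" | "advertising" | "personalisation";

interface ConsentFlow {
  label: string;
  // Which categories end up enabled if the user follows this flow.
  granted: ConsentCategory[];
  // How many clicks/taps the flow requires before the user reaches the content.
  interactionSteps: number;
}

// "Accept all": a single prominent button, everything enabled.
const acceptAll: ConsentFlow = {
  label: "Accept all",
  granted: ["necessary", "analytics", "advertising", "personalisation"],
  interactionSteps: 1,
};

// "Manage preferences": open a settings panel, untick each optional
// category, then confirm — several steps between the user and the page.
const managePreferences: ConsentFlow = {
  label: "Manage preferences",
  granted: ["necessary"],
  interactionSteps: 1 /* open panel */ + 3 /* untick categories */ + 1 /* confirm */,
};

// The asymmetry itself is the dark pattern: the path of least resistance
// is also the path of maximum disclosure.
for (const flow of [acceptAll, managePreferences]) {
  console.log(
    `${flow.label}: ${flow.interactionSteps} step(s), shares ${flow.granted.length} data categories`
  );
}
```

Regulation aimed at dark patterns would, in effect, target this kind of engineered asymmetry rather than the mere presence of a consent dialog.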
Rewards and punishment is another extremely manipulative tactic: extra functionality on a website is granted or withheld depending on whether the user makes the ‘right’ choice when allowing data collection. Similarly, the forced action and timing dark pattern gives users the impression that they must make a decision the moment a pop-up appears, or else be prohibited from using the service until they do. This may not seem entirely harmful in itself, since companies like Facebook were obliged to present users with their updated GDPR-compliant settings.
The ball would seemingly be in the user’s court, except that users felt they had to decide on the new GDPR settings the moment the pop-up appeared on their screens. Add the ease dark pattern to the mix, and it becomes clear that an on-the-go mobile Facebook user, wanting to quickly access the site during a work break, would be nudged in the direction most beneficial to Facebook: agreeing to whatever default settings Facebook provided, in the belief that they would not be able to use the site in the same way unless they decided then and there.
The larger issue is that, as consumers increasingly interact with anthropomorphised AI customer service chatbots and voice assistants that are exploding in popularity, no effective policy has addressed these types of consumer protections or explained how the wider data economy is connected to the evolution of dark patterns. Once again, despite some attempts such as the GDPR and some discussion of data protection within the High-Level Expert Group on Artificial Intelligence’s ethical guidelines, data collection is not challenged but simply assumed to occur; certain data is merely protected after the fact, not from being collected in the first place. Whatever the reason this crucial aspect of ethical AI development has remained unaddressed, it is clear that dark pattern tactics are still used widely across websites.
I am by no means arguing that these problems have an easy solution. Even though respect for human autonomy is included in the AI HLEG’s ethical framework, the proposal that ‘AI systems should not unjustifiably subordinate, coerce, deceive, manipulate…’ (p. 12) still leaves room for justifying these actions: if the interests of industry are prioritised over consumer protections, using deceptive dark patterns to generate more data power and better position European companies may be seen as a valid justification for AI to engage in coercion and deception. This would not be entirely out of line with the European Commission’s strategy for data, which argues that Europe is positioned to become a global leader in the data economy by using the zettabytes of data amassed yearly to boost its industrial operations. This troubling possibility is echoed in the Access Now response, which argues against creating AI merely for the sake of economic advancement, calls for better regulation surrounding the use of non-personal data, and states that the EU should put ‘...the protection of fundamental rights ahead of concerns about global competitiveness in AI’ (p. 2).
In a similar vein, my last post briefly discussed the two-tier distinction between high-risk and low-risk AI. While understandable, this division leaves certain high-risk AI tools under heavy regulation but largely relegates low-risk AI to self-regulatory schemes. Because of this, in my response I point out that much more oversight must be applied to the low-risk category than is proposed, not least because the general consumer population will likely interact with that category of AI more frequently, a sentiment shared in the German government’s response calling for a broader high-risk category. This places a large burden on European policymakers to expand their delineation, which either makes the distinction seem less meaningful in the first place or shows that a more nuanced approach with multiple levels of risk is more realistic.
Existing research has shown that humans already have a tendency to self-disclose information during interactions with computer systems, following existing conversational norms of reciprocity. It can reach the point where humans blame themselves, rather than the computer, for negative outcomes if they have continuously self-disclosed and feel a high level of attraction to the system. This research suggests that, to a certain degree, no deception or manipulation by AI conversational agents may be necessary to elicit certain consumer information, which raises the question of how far audio or text-based conversation data can be used by companies. And subsequently, depending on the sensitivity of the information disclosed by a consumer during an AI chatbot interaction, if that data is temporarily stored by an organisation, would a high-risk or low-risk categorisation apply if a data breach were to occur? While we might intuitively place some blame with consumers who self-disclose highly sensitive information during an agreed-upon low-risk AI chatbot conversation, I ask why we accept lower standards of data protection in some instances than in others. These and other important questions remain unanswered, so the purpose of this summary, along with my consultation response, is to advocate for the acknowledgement of dark patterns and to identify the data economy as the motivation behind increased data gathering via AI.
Unsurprisingly, Google’s response stresses its fear of no longer being able to easily obtain training data for its AI applications, stating that this would ‘…hinder using AI quickly and effectively to respond to crises such as the current COVID-19 pandemic’ (p. 3).
Fortunately, there is hope. I do not believe we are too late to prevent the next iteration of technological advancement from becoming another pawn in the data economy, an arena in which governments have scrambled and all but failed to address the scope of the problem. The solution starts with tracing dark patterns back to their origins in web UX to understand how existing patterns can be retrofitted to exploit users via AI tools. This would require not only more in-depth research into dark patterns themselves, but also learning about the values guiding UX designers and how concepts such as value-sensitive design can help preserve user autonomy during human-AI interactions.
Finally, addressing the question of why an AI-powered conversational agent would wish to coerce more information from its users requires an overhaul of the current minimal regulations surrounding the data economy. If incentives to collect valuable consumer data remain, corporations around the world will continue to find new and innovative ways of doing so. What must occur, therefore, is multi-stakeholder engagement that levels the playing field for both companies and consumer advocacy groups on a global scale. These issues clearly span borders, and we should no longer expect individual countries to draft regulation that will solve current tech problems single-handedly. That being said, the U.S. is taking the issue of dark patterns in data collection seriously with the DETOUR Act proposed in the Senate, which specifically targets large online platforms. While the act is not perfect, nor has it been fully agreed upon, it is my hope that the European Commission will look to some of its proposals for guidance and include dark pattern protections in its upcoming revised AI policy, so that it may be the global leader in advocating for ethical AI development.
As for the next steps in the legislative process, the European Commission will now read and publish the roughly 1200 responses it received during the consultation period.
If you would like to read my consultation response in its entirety, I have made it available here.
In my response, I explore some of the topics mentioned above in more depth and address the importance of establishing a definition of a ‘user’s best interest’ when making decisions based on interactions with an AI system. This couples the importance of preserving user autonomy with the prohibition of dark patterns, which by definition are meant to steer users away from acting in their ‘best interest’. As always, I welcome any feedback and new perspectives below to further the discussion, as these topics clearly extend beyond the scope of this short summary.