Washington, D.C. – U.S. Senator Ben Ray Luján (D-N.M.), Chair of the Subcommittee on Communications, Media, and Broadband, urged National Telecommunications and Information Administration (NTIA) Administrator Alan Davidson to create responsible guardrails around Artificial Intelligence (AI) development, governance, and use.
Senator Luján specifically outlined language equity, artist and consumer protections, and privacy as key areas where responsible regulations are needed. Senator Luján submitted his comments to Administrator Davidson as part of a public comment period on AI system accountability measures and policies.
“It is time for Congress and the Administration to create and implement responsible guardrails around AI development, governance, and use. As Chair of the Subcommittee on Communications, Media, and Broadband and member of the Consumer Protection and Science Subcommittees under the Senate Committee on Commerce, Science, and Transportation, I want to ensure online platforms that use AI models or offer them for consumer use are doing so in a responsible way,” said Senator Luján.
“A responsible AI framework is critical to ensuring that this rapidly advancing technology is used in ways that promote digital equity, creativity, democratic integrity, and economic equity,” Senator Luján continued. “I urge you to use this well-timed and thoughtful docket to support the creation of responsible AI frameworks and principles.”
Full text of the letter is available below:
Dear Administrator Davidson,
I applaud your commitment to accountable and trustworthy artificial intelligence systems (AI). AI is a critical technology that stands to transform all aspects of society. It is time for Congress and the Administration to create and implement responsible guardrails around AI development, governance, and use. As Chair of the Subcommittee on Communications, Media, and Broadband and member of the Consumer Protection and Science Subcommittees under the Senate Committee on Commerce, Science, and Transportation, I want to ensure online platforms that use AI models or offer them for consumer use are doing so in a responsible way.
In particular, audits or certifications of AI must include transparency, disclosure requirements, and tools to assess and incentivize language equity and protections for artists and consumers.
Language Equity
Any certification, audits, or assessments of artificial intelligence systems must include requirements that the AI performs consistently across languages. There are well-known cross-sector biases in AI systems with respect to race, gender, and other characteristics. With the recent explosion of large language models, language equity is an increasingly important issue that developers and auditors of AI must assess before a model is made available for commercial or consumer use. Large language models likely do not work as well in low-resource languages, for which there is less high-quality training data, leaving speakers of those languages, including many immigrant communities, more exposed to misinformation, disinformation, scams, and fraud.
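For illustration only, the sketch below shows one minimal form such a per-language assessment could take: score a model on parallel evaluation sets for each language and flag any language that trails the best performer. The `predict` callable, evaluation data, and tolerance threshold are hypothetical placeholders, not anything prescribed by the letter or by NTIA.

```python
from typing import Callable

# Illustrative sketch only: compare a model's accuracy across languages to
# surface potential language-equity gaps. The predict() callable, evaluation
# data, and tolerance are hypothetical placeholders, not an audit standard.

def per_language_accuracy(
    predict: Callable[[str], str],
    eval_sets: dict[str, list[tuple[str, str]]],
) -> dict[str, float]:
    """Return accuracy per language over (prompt, expected_answer) pairs."""
    return {
        lang: sum(predict(p) == expected for p, expected in examples) / len(examples)
        for lang, examples in eval_sets.items()
    }

def equity_gaps(scores: dict[str, float], tolerance: float = 0.10) -> list[str]:
    """Flag languages trailing the best-scoring language by more than `tolerance`."""
    best = max(scores.values())
    return [lang for lang, acc in scores.items() if best - acc > tolerance]

# Toy usage with a stand-in "model" that only handles English well:
model = lambda prompt: "yes" if prompt.startswith("en:") else "?"
scores = per_language_accuracy(model, {
    "English": [("en: q1", "yes"), ("en: q2", "yes")],
    "Navajo": [("nv: q1", "yes"), ("nv: q2", "yes")],
})
print(scores, equity_gaps(scores))  # flags "Navajo"
```

A real audit would use standardized multilingual benchmarks and far larger samples; the point of the sketch is only that per-language comparison is mechanically simple to require.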
I introduced the Language-inclusive Support and Transparency for Online Services (LISTOS) Act to improve multilingual large language models, automated decision-making systems, and content moderation practices online to better protect non-English speaking communities. To enhance and cement these protections, I urge you to incorporate language equity and investment questions and requirements in any and all recommendations for audits, assessments, or certifications of AI systems.
Protections for Artists
Powerful generative AI models can create extremely convincing text, images, and audio, making them powerful tools not only for creators and consumers, but also for fraudsters and scammers. Creative industries, especially music and visual arts, are deeply concerned about copyright infringement and subsequent market dilution. OpenAI has openly acknowledged that its programs are trained on “large, publicly available datasets that include copyrighted works.” Copying such works without permission from copyright owners may infringe copyright, and powerful AI models can easily create sound, images, or videos that are indistinguishable from artists’ own voices, names, images, or likenesses.
I urge you to include questions and tools in audits of AI that protect artists. These protections should include existing copyright protections, but also recommendations on how AI model developers and users can appropriately credit and compensate artists when (1) their art is used to train AI models, or (2) AI-generated outputs are created “in the style of” particular artists, raising name, image, and likeness issues or diluting the market for their work.
Consumer Protections
Generative AI enables the creation of false content quickly, cheaply, and at scale. AI-generated images and voices are already extremely realistic, sometimes indistinguishable from the real thing, and AI-generated video quality is rapidly improving. AI-generated photos are flooding social media and the internet and are being used to spread false narratives. Scammers are using generative AI to clone individuals’ voices and use those fake recordings to scam family members, a problem so widespread that the FTC recently put out a consumer alert on the issue.
Assessments of AI must pay particular attention to models that can generate content and the intended use cases for those models. I urge you to include questions regarding what types of guardrails AI model developers build into their products to track or prevent users from generating fraudulent, false, or misleading information related to elections, civil rights, or public health; creating harassing or abusive content; or cloning individuals’ voices or likenesses. For example, do model developers include watermarks in their AI-generated content to provide some form of evidence that the content was AI-generated? Do they include enforceable terms and conditions in generative AI products to prohibit these types of uses?
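For illustration only, here is a minimal sketch of one published family of techniques behind the watermark question above: a statistical “green list” watermark for generated text, in the spirit of Kirchenbauer et al. (2023). A generator biases sampling toward a pseudorandom, key-dependent subset of the vocabulary, and a detector checks whether that bias is present. The hash key, toy vocabulary, fraction, and whitespace tokenization below are simplified assumptions, not any vendor’s actual scheme.

```python
import hashlib

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Pseudorandomly partition the vocabulary, keyed on the previous token, so a
    # generator and a detector sharing the scheme agree on the "green" subset.
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256(f"{prev_token}|{tok}".encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # A watermarking generator nudges sampling toward green tokens, so text it
    # produced should show a green fraction well above the ~50% chance rate.
    if len(tokens) < 2:
        return 0.0
    hits = sum(cur in green_set(prev, vocab) for prev, cur in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

In practice, detection is a hypothesis test (for example, a z-score against the chance rate) over a real tokenizer’s vocabulary, and watermarks for images and audio rely on entirely different techniques.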
Privacy
AI poses the same privacy questions as social media and the internet age, with increasing urgency. Large AI models are trained using vast amounts of data, much of which is scraped from the internet with no regard for data privacy. Further, there are no standard documentation requirements relating to data sourcing and sensitive information that might be found in datasets, making it difficult to ascertain just how much sensitive or personally identifiable data is in AI models and what might be accidentally revealed. The data users input into consumer-facing AI models like chatbots is also ripe for privacy breaches: a ChatGPT leak in March revealed users’ personal and financial data. Tech companies themselves have banned the use of ChatGPT by employees due to fears that OpenAI would access and use sensitive company information input into the chatbot.
Audits and assessments of AI tools must ensure privacy protections. Methods of detecting and masking sensitive and personally identifiable data, both in training datasets and in data input by users after model release, should be a core part of any AI model. AI model developers should also be required to enforce a “right to be forgotten” and provide avenues for individuals and users to request and verify deletion of sensitive and private data.
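For illustration only, the sketch below shows the simplest form such a detection-and-masking pass can take, using regular expressions for a few common U.S. identifier formats. The patterns and placeholder tags are illustrative assumptions; production systems need far broader coverage than regexes alone can provide.

```python
import re

# Illustrative patterns for a few rigidly formatted U.S. identifiers; real PII
# detection needs much broader coverage (names, addresses, non-U.S. formats).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matches of each pattern with a typed placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Call 505-555-0123 or email jane.doe@example.com; SSN 123-45-6789."))
# -> Call [PHONE] or email [EMAIL]; SSN [SSN].
```

Pattern matching of this kind catches only rigidly formatted identifiers; names, addresses, and free-form sensitive details typically require trained entity-recognition models on top.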
A responsible AI framework is critical to ensuring that this rapidly advancing technology is used in ways that promote digital equity, creativity, democratic integrity, and economic equity. I urge you to use this well-timed and thoughtful docket to support the creation of responsible AI frameworks and principles.