Navigating the Future: Unpacking Biden’s Executive Order on Artificial Intelligence
As we plunge deeper into the realms of innovation, the potential of artificial intelligence to revolutionize industries, streamline processes, and enhance our daily lives is undeniable. However, with great power comes great responsibility, and the widespread integration of AI raises critical questions about its ethical implications and potential negative impacts. President Biden’s executive order on AI, issued on 30 October 2023, signals a long-overdue proactive stance toward addressing these concerns, underscoring the need for a comprehensive strategy to harness the benefits of AI while mitigating its risks.
The executive order, a move toward ethical AI deployment, recognizes the urgency of managing the potential downsides of this powerful technology. As AI systems become increasingly sophisticated, concerns about bias, privacy infringements, and other negative impacts on consumers, workers, and security loom large. The order underscores the importance of ensuring that AI technologies are developed and implemented with transparency, fairness, and accountability in mind.
By prioritizing the responsible and ethical use of AI, the Biden-Harris administration aims to pave the way for a future where AI is a force for good, fostering innovation without compromising fundamental values.
Consumers will be grateful
Legislation plays a pivotal role in shaping the impact of AI on consumers. Clear and comprehensive regulation is essential to ensure that AI systems adhere to safety and ethical standards, protect user privacy, and avoid discriminatory practices. Striking this balance through legislation is crucial to harness the positive potential of AI while mitigating its negative impacts.
Privacy risks are paramount. Because AI systems often rely on extensive data collection, they raise the specter of unauthorized access and misuse of personal information. For instance, smart devices with voice assistants may inadvertently record and transmit private conversations, causing distress, anxiety, and real harm to the people affected. Ethical issues also emerge when AI algorithms inadvertently perpetuate biases, such as discriminatory hiring practices, reinforcing existing societal prejudices. Left unchecked, these biases could undo decades of hard-won progress toward a fairer society.
Developments in AI and the deployment of AI in critical systems, such as autonomous vehicles, also raise safety concerns. There is wide debate on whether autonomous vehicles are safer or more dangerous than human-driven cars. Security issues are also prevalent, with AI systems susceptible to exploitation through adversarial attacks, potentially leading to manipulated outcomes in sectors like finance or healthcare.
However, when AI is deployed responsibly and ethically, it has the potential to bring about substantial positive impacts for consumers. More efficient, personalized services, along with AI-driven advancements in healthcare such as early disease detection and personalized treatment plans, hold the promise of improving health outcomes and quality of life. The key lies in striking a balance between leveraging the advantages of AI and implementing safeguards to protect consumers from potential harm.
What does the executive order say?
President Biden issued the executive order to reduce AI-related risks, particularly those facing consumers, workers, and minority groups. In the President’s own words: “to realize the promise of AI and avoid the risk, we need to govern this technology.”
The executive order consists of eight sections:
1. New standards for AI safety and security
Developers of AI systems are required to share safety test results, along with other critical information, with the US government.
2. Protecting Americans’ privacy
AI poses potential threats to personal privacy, including the risk of data breaches, profiling through behavioral tracking, and the potential misuse of facial recognition and biometric data. There is also an increased risk of extracting and exploiting personal data for training AI systems. To address these concerns, the order directs actions such as prioritizing federal support for privacy-preserving techniques in AI development, funding research and technologies that protect privacy, evaluating how agencies handle commercially available information, and developing guidelines for agencies to assess the effectiveness of privacy-preserving techniques, particularly in AI systems.
3. Advancing equity and civil rights
The rise of AI poses a significant risk to equity and civil rights due to the potential for algorithmic discrimination. When AI systems are trained on biased data, they can unintentionally perpetuate and exacerbate existing societal inequalities, leading to discriminatory outcomes in areas such as hiring and criminal justice. To address these issues, there is a pressing need for increased transparency, accountability, and ethical consideration in the development and deployment of AI to ensure fair and just outcomes.
4. Standing up for consumers, patients, and students
AI offers tangible benefits to consumers, enhancing products in terms of quality, affordability, and accessibility. This section of the executive order directs the development of safe and affordable life-saving drugs and the establishment of safety programs to address AI-related harms in healthcare. It also directs the development of AI-enabled education tools, such as personalized tutoring in schools.
5. Supporting workers
The impact of AI on the labor market includes job displacement through the automation of routine tasks and the polarization of employment. Simultaneously, it creates new opportunities with the emergence of AI-related roles and the expansion of the digital economy. To mitigate negative effects and maximize opportunities, proactive measures must be taken, for example, upskilling, social safety nets, ethical AI adoption, and collaboration between industry and education.
6. Promoting innovation and competition
The executive order aims to maintain America’s leadership in AI innovation, and introduces measures to catalyze AI research in critical areas like healthcare and climate change. The order emphasizes fostering a fair and competitive AI ecosystem, providing small developers and entrepreneurs with technical assistance and resources. Additionally, the order seeks to enhance the presence of highly skilled immigrants in critical AI areas by modernizing visa criteria and processes.
7. Advancing American leadership abroad
The challenges and opportunities related to AI are global, and thus the executive order directs actions to expand international engagements, with a focus on creating robust frameworks for the safe and secure deployment of AI. The order emphasizes the acceleration of AI standards development with international partners, ensuring safety, security, and trustworthiness. It also highlights the promotion of responsible AI deployment to address global challenges like sustainable development and safeguarding critical infrastructure.
8. Ensuring responsible and effective government use of AI
The executive order recognizes the potential benefits of AI in government operations, emphasizing its ability to improve efficiency, cut costs, and enhance security. However, it acknowledges the associated risks, such as discrimination and unsafe decisions. To ensure responsible AI deployment, the President directs actions, including issuing guidance for agencies’ AI use with standards to protect rights and safety, improving AI procurement, and strengthening deployment. The order also focuses on expediting the acquisition of AI products and services, fostering a government-wide AI talent surge, and providing training for employees at all levels in relevant fields.
The United States is leading the way
Earlier in 2023, a number of major global tech companies, such as OpenAI, Alphabet, and Meta, made a voluntary commitment to implement measures to make their technology safer. There was also, of course, the famous open letter calling for a pause on giant AI experiments, signed by notable figures like Elon Musk, Steve Wozniak, and Yuval Noah Harari. All of these initiatives recognize the potential threats to humanity posed by a world in which AI is unregulated. But Biden’s executive order is the “first major binding government action relating to AI.”
The signing of this order indicates the American government’s commitment to fostering the responsible development of AI. It reflects a proactive approach to the ethical and safety concerns associated with rapidly advancing technologies. By prioritizing rules that govern AI, the United States demonstrates its dedication to ensuring that the benefits of AI are harnessed in a manner that safeguards the well-being of its citizens. Although no other government has yet taken comparable binding action on AI, other nations are aware of the risks and are working toward regulating this new and revolutionary technology. For example, the European Union (EU) is working on legislation to protect consumers from potentially dangerous applications of AI.
Lawyers, AI, and the executive order
With new rules around the use and development of AI comes more responsibility for those who enforce them. Lawyers face the imperative of staying on top of the evolving landscape of AI regulation. Legal professionals must proactively engage with the dynamic regulatory frameworks that govern AI’s deployment and impact. Understanding and interpreting these rules is crucial not only for advising clients on compliance but also for effectively enforcing AI-related regulations. As AI development accelerates, legal professionals must adapt their expertise to ensure the effective enforcement of the laws governing this transformative technology.
But the impact of AI on lawyers is actually twofold. AI also presents huge opportunities for lawyers and can be used in a number of ways to make their practice more efficient. Examples include document review, legal research, and contract review and analysis. These applications of AI can cut the time lawyers spend on such tasks to a fraction of what it once was.
AI for good
Many companies are harnessing AI for good, focusing on solutions that address critical challenges and contribute to the betterment of humanity. One prominent example is Google’s DeepMind, which has applied AI to healthcare. DeepMind’s algorithms have been used for early disease detection, disease diagnosis, and drug development.
At Darrow, we also believe in AI for good. Our guiding vision is “frictionless justice”: a world where people can trust that every legal violation is swiftly discovered, precisely valued, and efficiently resolved. We leverage cutting-edge generative AI to detect legal violations for plaintiffs’ attorneys. This enables attorneys to cut down on “lost time” and the hundreds of unbillable hours spent between cases each year, and to focus on what they do best: fighting for justice.
Setting a precedent for responsible innovation
President Biden’s executive order on AI marks a pivotal moment in the trajectory of technology governance. By recognizing the imperative to regulate AI, the Biden-Harris administration has taken a significant step toward ensuring that this powerful tool is harnessed for the collective good rather than exploited for detrimental purposes. As the first government to take major binding action on AI, the United States has set a precedent for responsible innovation. Striking a balance between fostering technological advancement and safeguarding ethical considerations, this executive order exemplifies the critical role of regulation in guiding the evolution of AI.
Want to find your next big case? Get in touch.