AI lang syne: A look back on 2023 and considerations for 2024 – International Association of Privacy Professionals

2023 marked a significant shift in artificial intelligence technology and ushered in a flood of laws and standards to help regulate it. Here’s a look at the major AI events of 2023, what may come in 2024 and some practical tips for responding to the challenges and opportunities that lie ahead.
In the U.S., the Federal Trade Commission put businesses on notice that existing laws, such as Section 5 of the FTC Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act, apply to AI systems. This past year, the FTC brought actions against Ring, Edmodo, and Rite Aid for violative practices involving AI. Its latest action against Rite Aid resulted in an order with requirements such as fairness testing, validation of accuracy, continuous monitoring and employee training. Commissioner Alvaro Bedoya described the order’s requirements as a “baseline” for reasonable algorithmic fairness practices. The FTC has also made clear through its actions this year that it will continue to use model deletion as a remedy.
On 30 Oct. 2023, U.S. President Joe Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” recognizing the benefits of the government’s use of AI while detailing core principles, objectives and requirements to mitigate risks. Building on the executive order, the Office of Management and Budget followed with its proposed memo “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” which outlines requirements for government agencies as they procure, develop and deploy AI. Together, the documents call on federal agencies to produce more specific guidance. While the executive order, the forthcoming final OMB memo and agency guidance apply to the federal government, companies providing services to the government will also be subject to these requirements.
The state and city AI policy landscape across the U.S. also continued to evolve, with a flurry of regional action on AI over the past year. Under their omnibus privacy laws, Colorado finalized rulemaking on profiling and automated decision-making, and California proposed rulemaking on automated decision-making technologies. Several other states passed similar laws providing an opt-out for certain automated decision-making and profiling, while other state and city laws focused on particular applications of AI, including child profiling, writing prescriptions, employment decisions and insurance.
Some states also spent 2023 establishing laws focused on government-deployed AI. For example, Illinois and Texas established task forces to study the use of AI in education and in government systems and the potential harms AI could cause to civil rights. Connecticut also passed legislation establishing a working group on AI and requirements for government. Additionally, in September 2023, Pennsylvania’s governor issued an executive order establishing principles for government-deployed AI.
Beyond the U.S., the EU and other countries and international bodies have also moved to regulate AI systems.
On 8 Dec. 2023, the EU reached political agreement on the AI Act, its comprehensive framework for the regulation of AI. The act scales requirements based on the risk level of the underlying AI system. It bans certain practices that pose an “unacceptable risk,” applies strict requirements to practices that are “high risk,” requires enhanced notice and labeling for “limited risk” systems that use AI, and allows voluntary compliance measures, such as codes of conduct, for “minimal risk” systems. The act also applies a separate tiered compliance framework to general-purpose AI models (including certain large generative AI models), with enhanced obligations for models that pose systemic risks. Once the text is finalized, the act is expected to enter into force sometime this summer.
Canada also launched a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems in September 2023 and continues its efforts to enact the Artificial Intelligence and Data Act. In China, more AI regulations took effect in 2023, including rules on deep synthesis in internet information services and on the management of generative AI. Discussions on international standards also advanced in 2023, with G7 leaders collaborating through the Hiroshima AI Process, which yielded principles and a code of conduct for organizations developing advanced AI systems.
2024 will bring more adoption and novel uses of AI tools and systems by government, private entities and individuals. As a result, more legislation and regulatory scrutiny around the uses of AI is expected.
Around the globe, in addition to the EU AI Act taking effect, more countries will likely consider and pass AI laws. As with the GDPR, many are likely to model their laws on the EU AI Act. Additionally, while Canada’s AIDA regulations may be finalized in the coming year, the provisions of AIDA would not come into effect for another two years.
In the U.S., more states will likely require data protection assessments for profiling and automated decision-making, including in the context of advertising, and some pending bills propose an opt-in for profiling. Several proposed state laws also address AI in employment contexts, including notice to employees and other restrictions on its use in employment decisions and monitoring, requirements for bias and disparate-impact analysis, and employee rights to request the information used in AI processing. Additionally, more laws and enforcement activity will continue to focus on preventing discriminatory harms in the context of credit scoring, hiring, insurance, health care, targeted advertising and access to essential services, and on disproportionate impacts of AI on vulnerable persons, including children.
With so much change coming, it can be hard to know where to focus your AI governance efforts. Consider the following practical tips as you head into 2024:
1. Develop and update AI processes, policies and frameworks
Have a process in place to keep up to date with changes in AI technologies, laws, use cases and risks. This will help ensure you have up-to-date information to keep policies and frameworks current and compliant.  
Create accountability by designating personnel responsible for your AI program and have a process to train personnel about AI policies and use of frameworks.
In developing policies and frameworks, consider the life cycle of your AI systems and tools, from the data used to train AI models in development, to data inputs and outputs processed in production. Policies and risk assessment frameworks should be updated to identify and address risks specific to AI systems. For example, policies and frameworks should address: securing AI systems and data; incident response procedures; data sourcing practices; data minimization and retention; assessing and monitoring systems for data integrity, bias, safety, and discriminatory or disparate impacts to individuals; assessing consequences, rate and likelihood of inaccurate outputs; and societal harms.
Review external policies and statements about your AI systems and data practices to ensure they align with your policies and properly disclose and accurately reflect information learned through inventories and risk assessments.
2. Put policies into action – conduct AI inventories and risk assessments, and monitor vendors
Conduct an inventory of existing AI systems. Identify and document the various AI systems in use, the content and data they process, the outputs they produce, and any downstream recipients of data or content. Once you have conducted an AI inventory, use this information to conduct an AI risk assessment that considers the particular risks described above.
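In practice, an AI inventory is simply a structured record per system. A minimal sketch of what such a record might capture is below; the field names and the example entry are illustrative assumptions, not drawn from any regulation or standard.

```python
from dataclasses import dataclass, field

# Hypothetical record structure for an AI system inventory.
# Field names are illustrative, not prescribed by any law or framework.
@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable team or individual
    purpose: str                    # intended use case
    training_data_sources: list[str] = field(default_factory=list)
    inputs: list[str] = field(default_factory=list)   # data processed in production
    outputs: list[str] = field(default_factory=list)  # content or decisions produced
    downstream_recipients: list[str] = field(default_factory=list)
    third_party: bool = False       # vendor-supplied vs. built in-house

inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="HR Ops",
        purpose="rank job applications",
        training_data_sources=["historical hiring decisions"],
        inputs=["applicant resumes"],
        outputs=["candidate rankings"],
        downstream_recipients=["hiring managers"],
        third_party=True,
    ),
]

# Flag vendor-supplied systems so they can be routed into due diligence.
vendor_systems = [r.name for r in inventory if r.third_party]
print(vendor_systems)  # ['resume-screener']
```

Even a lightweight structure like this makes the follow-on risk assessment concrete: each record tells you which data flows, outputs and recipients need to be evaluated.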
Don’t overlook third-party AI solutions and the use of AI by third-party vendors as part of your assessment. For third-party AI solutions, request their AI policies and administer AI due diligence questionnaires. Also consider the provenance of the data used to develop their AI tools: review the types of data sets used to train the AI algorithms and the purposes for which the tools were developed, and evaluate whether those reflect the types of data and purposes in your intended deployment. Finally, review these tools and your more traditional vendors to learn whether your data is being used for their own (or others’) AI purposes.
3. Leverage existing principles and resources for today and champion flexibility for tomorrow
As organizations grapple with new challenges, changing landscapes and uncertainty posed by AI technologies and regulation, it is easy to get overwhelmed. For areas of uncertainty, you can achieve some clarity and purpose by centering AI governance on your established organizational values and principles. And remember, many AI governance resources already exist.  
Initial AI governance efforts will need to continuously adapt as new technologies, use cases, laws and regulations, and market standards evolve. As a result, AI governance efforts should encourage flexible strategies. For example, using compartmentalization and machine unlearning methods may help businesses retain models when the initial training data becomes unusable or problematic, due to legal or other reasons, without needing to delete and rebuild a model in its entirety. AI professionals should set such expectations for flexibility early and often in 2024 and in the years to come.
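One way to picture the compartmentalization idea mentioned above: train separate sub-models on disjoint shards of data and aggregate their predictions, so that when one shard’s data later becomes legally or otherwise problematic, only that sub-model needs retraining. The sketch below is a toy illustration of that shard-based pattern only; the constant-predictor “model,” the averaging scheme and all names are simplifying assumptions, not a real unlearning implementation.

```python
# Toy sketch of shard-based ("compartmentalized") training: each sub-model
# sees only one data shard, so removing a shard's data requires retraining
# only that sub-model, not rebuilding the whole ensemble.

def train(shard):
    # Stand-in for real model training: fit a constant predictor
    # equal to the mean label of the shard.
    xs, ys = zip(*shard)
    mean = sum(ys) / len(ys)
    return lambda x: mean

def predict(models, x):
    # Aggregate sub-model outputs by simple averaging.
    return sum(m(x) for m in models) / len(models)

shards = [
    [(1, 2.0), (2, 4.0)],   # shard 0
    [(3, 6.0), (4, 8.0)],   # shard 1 -- later found to be problematic
]
models = [train(s) for s in shards]
print(predict(models, 0))  # 5.0

# "Unlearn" shard 1 by retraining only its sub-model on replacement data,
# leaving the shard-0 model untouched.
models[1] = train([(3, 3.0)])
print(predict(models, 0))  # 3.0
```

The design point is the isolation boundary: because no sub-model ever saw another shard’s data, dropping a shard’s influence is a local retraining step rather than a full model rebuild.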