AI Act and Casinos: Navigating New Regulations
Data is the new oil, and to extend the metaphor, over the last ten years we have progressed past the robber-baron phase into a formalized, professional industry with public accountability. The European Union's General Data Protection Regulation (GDPR) took effect in 2018, California followed its lead two years later, and today more than 80 jurisdictions worldwide have put similar laws in place. Last year the EU passed the AI Act, and Colorado followed within two months. AI governance is on the agenda for regulators across the globe.
What lessons can casino operators take from recent rulemaking? And what should they expect next?
Casino operators have a clear interest in physical security, and governing bodies have carved out both general security exemptions and specific casino-related rules to allow video and other data collection. The regulatory trend, however, has shifted from granting broad rights to collect data toward tightening how that data may be used once collected. Previously, operators could collect data for the primary goal of security and then reuse it for virtually any purpose. Regulators globally are now imposing broad limits on such repurposing, and because this general rulemaking speaks to specific uses of data, it effectively supersedes the special status of casinos. In short, casino operators retain special rights to collect data but face general restrictions on how they use it.
AI Governance: Preparing for New Challenges
Forthcoming AI regulations are likely to follow a similar pattern. The EU AI Act lays out a regulatory framework based on the risk posed if an AI system is misused or misapplied. All organizations are required to supervise their AI systems and monitor for bias, with oversight obligations largely proportional to each system's risk level. There is also a general requirement to monitor for, measure, and mitigate bias in any AI system, and operators are responsible for determining what that means in their own context.
Operational Improvements and Compliance Workload
Casino operators report high expectations of operational improvements from AI, but compliance teams should likewise expect new workload. Machine-learning systems deployed for security functions, such as watching for suspicious behavior or checking for excluded patrons, will be considered high-risk and must have a proportionate level of ongoing supervision. Teams will want to keep records that separate how data is used. For example, video analytics that summarize foot traffic through the casino may be considered minimal-risk under the AI Act, but the underlying video data would still fall under privacy laws. And if that foot-traffic analysis is then used as a baseline to identify anomalies, for example alerting security when a person enters a restricted area at an unusual time, that heightened purpose would fall into a higher risk category.
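One way to keep such purpose-separated records is a data-use log that tags each use of a dataset with its declared purpose and a risk tier. The sketch below is only illustrative: the purpose names and the tier assignments are hypothetical examples chosen to mirror the foot-traffic scenario above, not a schema or classification prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical mapping of processing purposes to risk tiers.
# These names and assignments are illustrative, not an official
# AI Act classification.
RISK_TIER = {
    "foot_traffic_summary": "minimal",
    "restricted_area_anomaly_alert": "high",
    "excluded_patron_check": "high",
}

@dataclass
class DataUseRecord:
    source: str   # which data source supplied the data, e.g. a camera feed
    purpose: str  # one of the declared purposes in RISK_TIER
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def risk_tier(self) -> str:
        return RISK_TIER[self.purpose]

# The same underlying video feed is logged once per purpose, so
# minimal-risk and high-risk uses can be audited separately.
log = [
    DataUseRecord("lobby_cam_3", "foot_traffic_summary"),
    DataUseRecord("lobby_cam_3", "restricted_area_anomaly_alert"),
]
high_risk_uses = [r for r in log if r.risk_tier == "high"]
print(len(high_risk_uses))  # prints 1
```

Keeping one record per purpose, rather than one per dataset, is what lets a compliance team show that the anomaly-alert use received high-risk supervision even though the summary use of the very same footage did not require it.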
Conclusion: Clarity on AI Regulations
Regulation is not prohibition, and the EU AI Act, like the GDPR before it, gives enterprises clarity on the rules of the road. The potential use of face recognition to, for example, reduce money laundering through identity fraud is a clear societal win. Far from banning such a system, the AI Act spells out a clear set of common-sense requirements for deploying it: it needs to be secure, it needs to work, and commercially reasonable efforts must be made to verify that these requirements are met in fact.
Because the EU AI Act is likely to serve as a template for other regulators, it gives casino operators worldwide a preview of the steps they should take as they adopt the technology.