Businesses are increasingly adopting artificial intelligence (AI) tools to support workforce decisions in areas such as hiring and retaining high-performing employees.
But to successfully deploy these AI tools and maximise their productivity benefits, employers should address potential concerns to ensure the technology produces fair and accurate results, does not exacerbate biases or inequalities, and does not unduly compromise worker privacy.
The US government’s role in this process should be to encourage AI adoption and establish guardrails to limit harms, not to impose precautionary regulations that inhibit innovation, according to a new report from the Center for Data Innovation.
“The dominant narrative around AI is one of fear, so policymakers need to actively support the technology’s growth,” says Hodan Omaar, policy analyst at the Center for Data Innovation and author of the report. “It is critical for lawmakers to avoid intervening in ways that are ineffective, counterproductive, or harmful to innovation.”
The Center’s report describes how precautionary regulations ranging from bans on specific types of technologies to opt-in requirements impose unnecessary costs, limit innovation, and slow adoption. To more effectively encourage responsible use of AI for workforce decisions, the report enumerates eight policy principles:
- Make government an early adopter of AI for workforce decisions and share best practices: National, subnational, and local governments should promote broad adoption of AI in the workforce, reducing the risks associated with AI and encouraging others to adopt and invest in the technology.
- Ensure data protection laws support the adoption of AI for workforce decisions: Governments should ensure data protection laws align with their AI goals by reducing unnecessary regulatory costs and avoiding undermining important data uses.
- Ensure employment nondiscrimination laws apply regardless of whether an organisation uses AI: Regulators should review and clarify how existing nondiscrimination laws apply to AI solutions so that employers using these tools understand how to comply.
- Create rules to safeguard against new privacy risks in workforce data: Policymakers should create data privacy legislation that generally allows employers to collect and use biometric data, encouraging innovation in the use of AI for the workforce, while restricting certain potentially invasive uses absent worker consent.
- Address concerns about AI systems for workforce decisions at the national level: Requiring AI tools to comply with a patchwork of broad state data protection laws creates unnecessary and unreasonable compliance costs for businesses and threatens the viability of the national market for AI tools. Policymakers should address policy questions at the national level through comprehensive federal data protection legislation that preempts state data laws.
- Enable the global free flow of employee data: Countries should hold employers accountable for managing the data they collect, regardless of where they store or process it.
- Avoid regulating the inputs of AI systems used for workforce decisions: Countries that wish to see the rapid growth of AI for workforce decisions, and to ensure these systems have sufficient, representative data to perform accurately, should avoid regulating the data sources these AI systems use.
- Focus regulation on employers, not AI vendors: Employers are best suited to ensure that the AI systems they use operate as intended and to identify and rectify harmful outcomes, because it is employers, not vendors, who make the most important decisions about how these systems impact workers.
“AI can help businesses recruit and hire employees faster while also retaining valued employees and ensuring fair compensation,” says Omaar. “Because the overwhelming majority of AI applications for making decisions about the workforce benefit the economy, businesses, and workers, governments should encourage responsible adoption and use of this technology.”
Follow us and Comment on Twitter @TheEE_io