The UK government faces mounting pressure over proposals to deploy algorithmic surveillance on bank accounts as part of efforts to curb welfare fraud. Advocacy groups focused on disability rights, privacy, and poverty warn that such measures could significantly erode privacy rights.
A coalition of these groups has set out its concerns in a letter to Liz Kendall, the Secretary of State for Work and Pensions. They argue that compelling banks to scrutinise customer accounts for suspicious activity would be an unwarranted intrusion into personal privacy and could harm vulnerable individuals.
The backdrop to this controversy is the announcement by Keir Starmer of a forthcoming fraud, error, and debt bill. This legislation would require financial institutions to share data indicative of potential benefit overpayments. While details of the bill remain under wraps, the Department for Work and Pensions has emphasized that the government would not directly access bank accounts or employ artificial intelligence for data analysis. Instead, any fraud signals would be manually reviewed by staff.
The initiative is driven by the government’s belief that welfare fraud is evolving and that enhanced legal powers are needed to effectively address it. The proposed data-sharing with banks is projected to save an estimated £1.6 billion over five years.
A previous attempt at similar legislation by the Conservative party failed to pass before the general election. Although that bill drew some support from the technology sector and information commissioners, its provisions on privacy and automated decision-making sparked intense debate.
Critics of the new Labour bill caution that it could lead to indiscriminate monitoring of the entire population's bank accounts while delivering only a marginal reduction in overall fraud and error. They liken the potential fallout to the Horizon scandal, in which faulty software led to the wrongful conviction of Post Office operators.
In response, a DWP spokesperson rejected the claims, assuring that the powers would be exercised with appropriate oversight and that staff would receive rigorous training. They stressed that bank data would not be linked to DWP algorithms and that every fraud signal would be thoroughly investigated by a human.
This development comes as the use of artificial intelligence expands across government operations, with a significant portion of departments exploring AI applications. However, welfare algorithms have faced scrutiny, notably when DWP software mistakenly flagged over 200,000 individuals for fraud investigations.
As the debate unfolds, the business community and policymakers are keenly observing the implications of integrating AI into welfare systems and the broader consequences for privacy and data security.