Denmark is using algorithms to dole out welfare benefits

Lars Løkke Rasmussen, Prime Minister of Denmark. Franco Origlia/Getty Images

  • Artificial intelligence and machine learning promise vast social benefits in governance, but without regulation they could damage democracy. 
  • Algorithms are especially attractive to welfare states, where they promise to deliver benefits more efficiently. 
  • Denmark, for example, is beginning to use algorithms to make its welfare state more efficient, but it does not appear to fully understand their dangerous potential. 
  • The municipality of Gladsaxe in Copenhagen has quietly been experimenting with a system that would use algorithms to identify children at risk of abuse.
  • But that same technology will inevitably take a toll on privacy, family life, and free speech, and can weaken public accountability of the government. 

Everyone likes to talk about the ways that liberalism might be killed off, whether by populism at home or adversaries abroad. Fewer talk about the growing indications in places like Denmark that liberal democracy might accidentally commit suicide.

As a philosophy of government, liberalism is premised on the belief that the coercive powers of public authorities should be used in service of individual freedom and flourishing, and that they should therefore be constrained by laws controlling their scope, limits, and discretion.

That is the basis for historic liberal achievements such as human rights and the rule of law, which are built into the infrastructure of the Scandinavian welfare state.

Denmark. Frédéric Soltan/Corbis via Getty Images

Yet the idea of legal constraint is increasingly difficult to reconcile with the revolution promised by artificial intelligence and machine learning—specifically, those technologies’ promises of vast social benefits in exchange for unconstrained access to data and lack of adequate regulation on what can be done with it.

Algorithms hold the allure of providing wider-ranging benefits to welfare states, and of delivering these benefits more efficiently.

Such improvements in governance are undeniably enticing. What should concern us, however, is that the means of achieving them are not liberal.

There are now growing indications that the West is slouching toward rule by algorithm—a brave new world in which vast fields of human life will be governed by digital code both invisible and unintelligible to human beings, with significant political power placed beyond individual resistance and legal challenge. Liberal democracies are already initiating this quiet, technologically enabled revolution, even as it undermines their own social foundation.

Consider the case of Denmark.

The country currently leads the World Justice Project’s Rule of Law ranking, not least because of its well-administered welfare state. But the country does not appear to fully understand the risks involved in enhancing that welfare state through artificial intelligence applications.

The municipality of Gladsaxe in Copenhagen, for example, has quietly been experimenting with a system that would use algorithms to identify children at risk of abuse, allowing authorities to target the flagged families for early intervention that could ultimately result in forced removals.

The children would be targeted based on specially designed algorithms tasked with crunching the information already gathered by the Danish government and linked to the personal identification number that is assigned to all Danes at birth. This information includes health records, employment information, and much more.

From the Danish government’s perspective, the child-welfare algorithm proposal is merely an extension of the systems it already has in place to detect social fraud and abuse. Benefits and entitlements covering millions of Danes have long been handled by a centralized agency (Udbetaling Danmark), and based on the vast amounts of personal data gathered and processed by this agency, algorithms create so-called puzzlement lists identifying suspicious patterns that may suggest fraud or abuse.
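The flagging mechanism described above, in which records linked by a citizen's personal identification number are scanned for suspicious patterns, can be illustrated with a toy sketch. All field names, rules, and thresholds below are invented for illustration only; the actual Udbetaling Danmark system is not public.

```python
# Hypothetical sketch of how a "puzzlement list" might be produced by
# joining records linked via a personal identification number.
# Every field, rule, and threshold here is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    person_id: str           # CPR-style personal identification number
    benefits_claimed: float  # annual benefits received
    reported_income: float   # income on file from tax records
    registered_address: str
    household_ids: list      # person_ids registered at the same address

def puzzlement_score(rec: CitizenRecord, all_records: dict) -> float:
    """Return a crude anomaly score; higher means 'more puzzling'."""
    score = 0.0
    # Rule 1 (invented): high benefits alongside high reported income.
    if rec.benefits_claimed > 50_000 and rec.reported_income > 300_000:
        score += 1.0
    # Rule 2 (invented): sharing an address with another benefit
    # recipient, which might suggest a falsely registered address.
    cohabitants = [all_records[p] for p in rec.household_ids if p in all_records]
    if any(c.benefits_claimed > 0 for c in cohabitants):
        score += 0.5
    return score

def build_puzzlement_list(records: dict, threshold: float = 1.0) -> list:
    """Flag every person whose score meets the threshold."""
    return [pid for pid, rec in records.items()
            if puzzlement_score(rec, records) >= threshold]
```

Even this toy version shows the accountability problem the article raises: a citizen flagged by such rules has no obvious way to learn which rule fired, or to contest it.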

Danish Prime Minister Lars Løkke Rasmussen gives a speech to open the Smart Country Convention on the digitization of public services on November 20, 2018 in Berlin. Tobias Schwarz/AFP/Getty Images

These lists can then be acted on by the “control units” operated by many municipalities to investigate those suspected of receiving benefits to which they are not entitled. The data may include information on spouses and children, as well as information from financial institutions.

These practices might seem both well intended and largely benign. After all, a universal welfare state cannot function if the trust of those who contribute to it breaks down due to systematic freeriding and abuse. And in the prototype being developed in Gladsaxe, the application of big data and algorithmic processing seems to be perfectly virtuous, aimed as it is at upholding the core human rights of vulnerable children.

But the potential for mission creep is abundantly clear.

Udbetaling Danmark is a case in point: The agency’s powers and its access to data have been steadily expanded over the years. A recent proposal even aimed to give this leviathan of a program access to the electricity use of Danish households, the better to identify people who register a false address to qualify for extra benefits.

The Danish government has also used a loophole in Europe’s new digital data rules to allow public authorities to use data gathered under one pretext for entirely different purposes.

And yet the perils of such programs are less understood and discussed than the benefits.

Part of the reason may be that the West’s embrace of public-service algorithms is a byproduct of lofty and genuinely beneficial initiatives aimed at better governance. But these externalities also benefit those in power by creating a parallel form of governing alongside the more familiar tools of legislation and policy-setting. And the opacity of the algorithms’ power means that it is not easy to determine when algorithmic governance stops serving the common good and becomes the servant of the powers that be.

Students of the Royal Danish Academy of Fine Arts, School of Architecture sit along a Copenhagen canal. Frédéric Soltan/Getty

This will inevitably take a toll on privacy, family life, and free speech, as individuals will be unsure when their personal actions may come under government scrutiny.

Such government algorithms also weaken the public’s ability to hold the government accountable.

Danish citizens have not been asked to give specific consent to the massive data processing already underway. They are not informed when they are placed on “puzzlement lists,” nor whether it is possible to legally challenge the designation. And nobody outside the municipal government of Gladsaxe knows exactly how its algorithm would identify children at risk.

Gladsaxe’s proposal has produced a major public backlash, which has forced the town to delay the program’s planned rollout. Nevertheless, the Danish government has expressed interest in widening the use of public-service algorithms across the country to bolster its welfare services—even at the expense of the freedom of the people they are intended to serve.

It may be tempting to dismiss algorithmic governance, or algocracy, as a mere extension of authoritarianism, as represented by China’s notorious social credit system, which has often been described as the 21st-century manifestation of Orwellian dystopia.

The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard during a demonstration by the German research centre for artificial intelligence at the CeBIT computer fair in Hanover, March 5, 2013. Fabrizio Bensch/Reuters

And one-party states do indeed find obvious comfort in using new technologies like AI to consolidate the power of the party and its interests. This conforms to historical examples of dictatorships using newspapers, radio, television, and other media for propaganda purposes while suppressing critical journalism and political pluralism.

But algocracy is not a matter of ideology; it is a matter of technology and its inherently attractive potential. As Denmark makes clear, liberal democracies face strong temptations to govern with algorithmic tools that promise huge rewards in efficiency, consistency, and precision.

Algocracies are likely to emerge as by-products of governments seeking to better deliver benefits to citizens.

And despite the fundamental differences between China’s one-party state and Danish liberal democracy, the very democratic infrastructure that distinguishes the latter from the former might not be able to keep algorithmic governance in check in the future.

There are good reasons to think judicial procedures will not be able to serve as a check on the growth of public-service algorithms. Consider the Danish case: The civil servants working to detect child abuse and social fraud will be largely unable to understand and explain why the algorithm identified a family for early intervention or an individual for control.

As deep learning progresses, algorithmic processes will only become more incomprehensible to human beings, who will be relegated to relying on the outcomes of these processes without meaningful access to the data, or the processing of it, that these systems use to produce specific outcomes. And in the absence of government actors making clear and reasoned decisions, it will be impossible for courts to hold them accountable for their actions.

Thus, algorithms designed with the sole purpose of eliminating social welfare free-riding will almost inevitably lead to increasingly draconian measures to police individual behavior. To prevent AI from serving as a tool toward this dystopian end, the West must focus more on algorithmic governance—regulations to ensure meaningful democratic participation and legitimacy in the production of the algorithms themselves. There is little doubt that this would reduce the efficiency of algorithmic processes. But such a compromise would be worthwhile, given the way that algocracy will otherwise involve the sacrifice of democracy.

Jacob Mchangama is the executive director of Justitia, a Copenhagen-based think tank focusing on human rights and the rule of law, and the host and producer of the podcast Clear and Present Danger: A History of Free Speech.

Hin-Yan Liu is an associate professor at the University of Copenhagen’s Faculty of Law, where he coordinates the faculty’s Artificial Intelligence and Legal Disruption Research Group.

More groups join in support of women in STEM program at Carleton

OTTAWA — Major companies and government partners are lending their support to Carleton University’s newly established Women in Engineering and Information Technology Program.

The list of supporters includes Mississauga-based construction company EllisDon.

The latest to announce their support for the program also include BlackBerry QNX, CIRA (Canadian Internet Registration Authority), Ericsson, Nokia, Solace, Trend Micro, the Canadian Nuclear Safety Commission, CGI, Gastops, Leonardo DRS, Lockheed Martin Canada, Amdocs and Ross.

The program is officially set to launch this September.

It is being led by Carleton’s Faculty of Engineering and Design with the goal of establishing meaningful partnerships in support of women in STEM.  

The program will host events for women students to build relationships with industry and government partners, create mentorship opportunities, and establish a special fund to support allies at Carleton in meeting equity, diversity, and inclusion goals.

VR tech to revolutionize commercial driver training

Serious Labs seems to have found a way from tragedy to triumph. The Edmonton-based firm designs and manufactures virtual reality simulators to standardize training programs for operators of heavy equipment such as aerial lifts, cranes, forklifts, and commercial trucks. These simulators let operators acquire and practice operational skills in a risk-free virtual environment so they can work more safely and efficiently on the job.

The 2018 Humboldt bus catastrophe sent shock waves across the industry. The tragedy highlighted the need for standardized commercial driver training and testing. It also accelerated the federal government’s implementation of a Mandatory Entry-Level Training (MELT) program for Class 1 and 2 drivers, now being adopted across Canada. MELT is a much more rigorous standard that promotes safety and in-depth practice for new drivers.

Enter Serious Labs. By proposing to harness the power of virtual reality (VR), Serious Labs has earned considerable funding to develop a VR commercial truck driving simulator.

The Government of Alberta has awarded $1 million, and Emissions Reduction Alberta (ERA) is contributing an additional $2 million, for the simulator’s development. Commercial deployment is expected to begin in 2024, with the simulator made available across Canada and the United States and the Alberta Motor Transport Association (AMTA) helping to provide simulator tests certifying that driver trainees have attained the appropriate standard. West Tech Report recently chatted with Serious Labs CEO Jim Colvin about the environmental and labour benefits of VR driver training, as well as the unique way Colvin went from angel investor to CEO of the company.


Next-Gen Tech Company Pops on New COVID Detection Test


While the world emerges from the initial stages of the pandemic, COVID-19 will continue to be a threat for some time to come. Companies such as ZEN Graphene are working on ways to detect the virus and its variants, and are at the forefront of this technology.

Nanotechnology firm ZEN Graphene Solutions Ltd. (TSX-Venture:ZEN) (OTCPK:ZENYF) is working to develop technology to help detect the COVID-19 virus and its variants. The firm signed an exclusive agreement with McMaster University to be the global commercializing partner for a newly developed aptamer-based SARS-CoV-2 rapid detection technology.

This patent-pending technology, which uses clinical samples from patients, was funded by the Canadian Institutes of Health Research. The saliva-based test is considered highly accurate, scalable, and affordable, and provides results in under 10 minutes.

Shares were trading up over 5% to $3.07 in early afternoon trade.
