Attitudes to 'artificial intelligence' and predictive algorithms seem to oscillate between hype and hysteria. The true picture is a good deal more mixed, but as more examples of predictive analytics in government come to light, it's time for some proper oversight.

(With Ali Knott, James Maclaurin and John Zerilli)

Last week, Immigration Minister Iain Lees-Galloway put a hold on the use of a computer-based tool to profile over-stayers. The tool is a ‘predictive analytics’ system: in this case, it learns to predict the likely harms and costs of an overstayer remaining in New Zealand from a set of other facts about that person, using a database of historical cases. Immigration NZ has denied claims that the tool relied on ‘ethnic profiling’, but its use has still proved highly controversial. We believe this is a good moment to take stock, more generally, of New Zealand’s use of predictive analytics in government.

Predictive analytics systems are widely used in government departments around the world. However, the public is often unaware of the existence of these systems, and of how they work. New Zealand is no exception. Last year, there was a minor furore when it emerged that ACC uses a predictive tool to profile its clients. Three years ago, there was a larger controversy around a study proposed by the Ministry of Social Development to help build a tool for predicting children at risk of abuse. Use of predictive analytics by the Inland Revenue was also in the news this week.

In the Artificial Intelligence and Law in New Zealand Project at the University of Otago, we have been studying the use of predictive analytics in government. We are convinced there is a place for such systems. They are an invaluable resource for decision makers tasked with making sense of large amounts of data. Used well, they help us to make decisions that square with the facts.

However, we believe there should be more public oversight of predictive systems used in government, and more transparency about how they work. How many predictive systems are currently in use in New Zealand government agencies? We don’t know. It’s not clear if anyone actually does. In the Immigration NZ case, even the Immigration Minister was in the dark: he has only just become aware of the tool, even though it has been in development for 18 months.

Even for those systems we do know about, we only have partial information about how they work, what data they use, and how accurate they are. We are told, for instance, that the Immigration NZ tool is ‘just an Excel spreadsheet’. But many algorithms can be run in Excel: what algorithm is being run in this case? On what data? With what results? And what margin of error?

These questions are particularly pressing now, in the light of the recent scandal surrounding Facebook’s use (and misuse) of personal data. The algorithms under the spotlight for Facebook are also predictive analytics tools: in this case, tools that predict a Facebook user’s personality from what they have ‘liked’ on the site. There are growing calls (which we fully support) to regulate the use of personal data gathered by social media sites.

However, the process of regulating giants like Facebook is likely to be complex: a matter for lengthy international negotiations. In the meantime, there is no reason why New Zealand should not put its own house in order as regards the use of these same tools in its own government. In fact, scrutiny of these tools is of particular importance, because of the huge impact decisions made by government agencies can have in people’s lives—not only in immigration, but in health, social services, criminal justice and many other contexts.

Of course, the use of algorithmic decision tools in the private sector (potential employers, banks, insurers etc) can also have a major impact, and might merit a regulatory response of its own. But public sector use could be a good place to start, modelling best practice and ensuring that public funds are spent on projects that are a good fit for purpose.

Our proposal is that an agency should be established in New Zealand to oversee the use of predictive analytics by publicly funded bodies. This agency would publish a complete list of the predictive tools used by government departments, and other public institutions such as ACC. For each system, it would also supply some basic information about its design: which variables constitute its input and output, and which techniques are used to learn a mapping from inputs to outputs.
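To make that concrete, here is a minimal, hypothetical sketch of the kind of system at issue: a scoring function that maps a set of input variables about a case to a predicted risk, with weights that a real system would learn from historical cases. The variable names, weights and threshold below are all invented for illustration; we do not know what inputs or model Immigration NZ's tool actually uses.

```python
# Hypothetical sketch of a predictive tool: a learned mapping from
# input variables to an output risk score. The weights are invented;
# a real system would fit them to a database of historical cases.

def risk_score(case, weights, bias=0.0):
    """Weighted sum of input variables -- exactly the kind of model
    that could indeed be run 'in an Excel spreadsheet'."""
    return bias + sum(weights[k] * case[k] for k in weights)

def predict(case, weights, threshold=1.0):
    """Map the numeric score to a yes/no prediction."""
    return risk_score(case, weights) >= threshold

# Invented example inputs and weights, for illustration only.
weights = {"previous_visa_breaches": 0.9, "months_overstayed": 0.1}
case = {"previous_visa_breaches": 1, "months_overstayed": 3}

score = risk_score(case, weights)   # 0.9*1 + 0.1*3
flagged = predict(case, weights)
```

Even a model this simple raises the questions we list below: which variables go in, how the weights were learned, and how often the resulting predictions are right.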

In addition, it would answer some key questions about the performance of the system, and about its use. Specifically:

  • How well does the system work? (That is, how often are its predictions correct?) We presume that all predictive tools are evaluated. But the results of evaluations are not always easy to obtain. What’s more, evaluation of government systems is currently piecemeal: there are no standard processes for evaluation and no indication of how frequently evaluations should be carried out. With so much hype around “AI”, it seems likely that there will be a clamour to get on board with it. This may lead to the purchasing of systems that aren’t fit for purpose, or that are no better than those we have already.
  • Is there any indication the system is biased in relation to particular social groups? Bias is a particular concern for predictive models used in government. In the US criminal justice system, there is evidence that tools for predicting a defendant’s risk of reoffending are biased against black people in the errors that they make. In fact there is often a trade-off between biased errors of this kind and overall system accuracy. But the public should be aware of this trade-off, and there should be an open debate about how to deal with it. In its report published earlier this week, the UK’s Lords AI Committee warned that ‘The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.’ This is no less of a concern here.
  • Can the system offer explanations about its decisions? New Zealanders already have a legal right to access and correct information held about them, but there is a concern that decisions made within the ‘black box’ of an algorithm will lack the transparency needed to allow a correction or challenge if a person thinks that a decision about them is wrong. The Department of Internal Affairs recently proposed a specific right to challenge decisions made by algorithms, along the lines of a right that exists in European law. If something like that were to be effective, though, it would seem to require that some kind of explanation can be given about how those decisions were made. Such explanations are easier to supply for some predictive systems than others, something that should perhaps be a factor in the choice of system design. 
  • How are human decision makers trained to use a predictive system? At the moment, predictive systems in government are used to ‘assist’ human case workers in making decisions: we presume final responsibility always lies with a person. However, when machines take over some part of a human’s job, the resulting human-machine interaction requires careful scrutiny. It’s important to make sure the human doesn’t fall into ‘autopilot mode’, assuming the machine is always right. This is a recognised problem in cars with semi-automated control, and it is a problem in decision-making systems too. The solution is good training of human case workers. There are many areas where human expertise still far outstrips that of computers: understanding language, complex social scenes, subtle nonverbal cues. Human decision makers must continue to rely on evidence from these sources, and to query a system’s predictions when they conflict with it.
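The bias question above can be checked quite concretely. The sketch below, run on entirely invented data, computes a per-group error audit of the kind we have in mind: it shows how a system can be reasonably accurate overall while its false positives fall disproportionately on one group. The groups, predictions and outcomes are fabricated for illustration; they describe no real system.

```python
# Sketch of a simple per-group error audit, on invented data.
# Each record is (group, predicted_positive, actually_positive).
records = [
    ("A", True, True), ("A", False, False), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", True, False), ("B", True, False),
]

def accuracy(recs):
    """Fraction of records where the prediction matched the outcome."""
    return sum(p == a for _, p, a in recs) / len(recs)

def false_positive_rate(recs):
    """Among cases that were actually negative, fraction wrongly flagged."""
    negatives = [(p, a) for _, p, a in recs if not a]
    return sum(p for p, a in negatives) / len(negatives)

overall = accuracy(records)                                   # 5/8
by_group = {g: false_positive_rate([r for r in records if r[0] == g])
            for g in ("A", "B")}                              # A: 1/3, B: 2/3
```

On this toy data, group B's false positive rate is twice group A's, even though the overall accuracy figure reveals nothing of the sort. Publishing exactly this kind of disaggregated evaluation is the sort of transparency we are proposing.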

The exact form of the body that oversees these questions is obviously a matter for further discussion. We envisage a body that advises government departments in the procurement or development of predictive systems, as well as their subsequent evaluation. This body could be part of Statistics New Zealand, which already plays an advisory role in many cases, or it could be delivered as part of the ‘Government as a Platform’ project currently under way at the Department of Internal Affairs. It would also be useful to examine frameworks for managing predictive analytics used in industry—in particular, the recent concept of an ‘analytics centre of excellence’, which is becoming widespread in large companies (and has already motivated government initiatives in Australia).

Whatever approach is followed, we have an opportunity to take leadership in the oversight of predictive analytics tools as they’re used by our own government institutions. This oversight will help to allay public concerns about how these important tools are used in government bodies. And it will be a useful first step in the wider project of regulating how these tools are used in our society more generally. 

The authors would like to acknowledge the generous support provided by the NZ Law Foundation for the Artificial Intelligence and Law in New Zealand Project at the University of Otago.

Comments (7)

by Rich on April 19, 2018

Is there any indication the system is biased in relation to particular social groups?

It may not even need to be.

Take for instance a system designed to detect potential overstayers among visitors. It's quite likely that people from poorer countries are more likely to overstay (see here). However you try to refine the system to remove bias, it's likely to come back to that.

A solution is not to try and use such a system, but to instead allow all visitors that meet some set objective threshold - and decide this based on a trade off between number of overstayers and convenience of tourism and business visits.

by Kyle Matthews on April 19, 2018

Presumably there are two issues in relation to bias: one, does the system have bias, and two, is its bias worse (or different) than that of the humans it is replacing? An AI system might be biased, but less so than humans, and it might be easier to remove bias from than from humans.

by Colin Gavaghan on April 19, 2018

We certainly don't want to pretend that all of these problems are unique to algo decision-makers, or to hold them to a standard of some impossibly perfect human decision-maker. (Ditto driverless cars, etc). But there may be some differences, including that: (1) algos might be biased in less obvious or otherwise more pernicious ways, and (2) if it's possible to make algos better (maybe even much better) than humans, then we shouldn't be settling for 'no worse.'

Precisely how we go about getting the best from these systems while avoiding the potential pitfalls is something we're still looking at: there are various suggestions on the table, including a "right to an explanation" of algorithmic decisions, a requirement for human supervision, etc. But as a first step, we (or someone!) need to know what's being used, and how.

by Steven Price on April 20, 2018

There's a little-known provision in the Official Information Act (section 23) that allows you to require a government agency to explain the reasons for any decision or recommendation that affects you in your personal capacity (as opposed to a decision that affects lots of people together, such as a tax increase). The agency has to provide the reasons for the decision, the findings of material fact, and a reference to the information that those findings were based on. The grounds for withholding such information are quite limited. It would be interesting to see how ACC or IRD or IS would answer such a request if the decision was based on predictive analytics.

by Charlie on April 20, 2018

Is there any indication the system is biased in relation to particular social groups?

I thought that was the whole point!

As I understand it, it's just a risk analysis tool based on data previously gathered and shoved into a regression analysis system: Add all the factors together to provide a risk of an individual incurring cost or overstaying, thus allowing customs to focus on the more likely candidates.

by Megan Pledger on April 20, 2018

I don't think the immigration tool was as complicated as a regression analysis.  

But even if it was a regression analysis, it still provides a risk based on average behaviour, not individual behaviour, so it's inherently unfair to the individual who may be nothing like his/her age/ethnicity peers. 

IMO overstayers should be deported on a first in/first out scenario unless they come to the attention of the authorities sooner e.g.  crime/working on the wrong visa.  


by Antoine on April 28, 2018

I suspect there are more predictive models in Government than you realise, if the term was interpreted reasonably broadly.

I think your proposed agency sounds like a white elephant that would tie up resources for no great gain (except perhaps a few mock-scandalised headlines).

I think the point of potential risk is not _predictive models_, but _decisions_. If we are to scrutinise something, it should be situations in which Government makes decisions that affect people. The addition of a predictive model to such a situation does not necessarily increase the risk of bad consequences (e.g. there can be discriminatory bias even if no model is used).






