AI Containment Research Group

Areas of research

The AI Containment Research Group focuses on the following areas of ethical and societal research:

Locating the key ethical issues

  • There is currently intense focus on the ethical consequences of artificial general intelligence (AGI), equipped with autonomous agency and raising the possibility of artificial maleficence, perhaps even constituting an existential risk. On the other hand, it is often argued that such concentration on relatively remote, if extreme, risks serves only to obscure the more immediate dangers posed by artificial intelligences that do not yet achieve full generality, notably in disinformation, privacy and bias.
  • There is also a third, emerging level of AI: what Mustafa Suleyman calls artificial capable intelligence (ACI). Systems at this level would not be able to out-plan humans, but they would greatly amplify the harmful potential of the forms of artificial intelligence already being deployed in areas of moral and societal sensitivity.
  • Current activities: Defining ACI as a specific research area, artificial sub-agency, 'hierarchical' safeguards.

Open sourcing

  • As illustrated by the ouster and immediate restoration of Sam Altman at OpenAI, there has been considerable focus on the proper nature of AI governance, and in particular on the viability of the open-source and not-for-profit paradigms. On the one hand, an end to open sourcing would have a chilling effect on innovation in the sector and concentrate power to an unwelcome degree in a small number of major players; on the other, there is a severe risk of potent computational capacities falling into the hands of bad actors.
  • Current activities: Asilomar for AI? Bad actor detection/exclusion by underlying AI.

Human synergy

  • Automated systems exhibit many persistent deficiencies, particularly in connection with societally sensitive decision-making processes. It is often argued that introducing humans-in-the-loop will help to make such processes fairer. However, given the opacity of modern AIs, there may be limits to the value of human intervention, and sometimes the involvement of humans might actually make matters ethically worse. Is this debate significantly changed by the new powers of advanced LLMs, and, in general, what is the optimal interrelationship between the contributions of human and artificial cognitive structures?
  • Current activities: Relative strengths (one-shot)/weaknesses (cognitive bias) of human intelligence, developmental AI, the centaur model.

Threats to democracy

  • There is already evidence that AI is being recruited extensively by political groupings of all kinds to promote their partisan preferences. To some extent this is an inevitable, even benign, process, giving an audience to previously unheard voices. However, AI can also clearly be used to spread disinformation, hate speech and fake news in general, and thereby poses a direct threat to the proper functioning of democracy. We investigate the appropriate and inappropriate uses of AI in the political domain and the tools available for maximising the former and minimising the latter.
  • Current activities: Possible collaboration with companies (e.g. Whisp) or pressure groups (e.g. Hope not Hate and More in Common).

Values-based development

  • Intense discussion is currently raging about the extent to which values should be built into AI design from the start. This in turn raises further deep questions. First, exactly which values should be baked in, and who should determine whether they are adequately reflected? Second, how should the balance of interests be struck between the competing claims of industry, the state and individual citizens?
  • Current activities: Assessment of the state of the art; 'weapons of math destruction'.

Regulation

  • Both the United States and the European Union are forging ahead with regulatory instruments, but there is far from universal consensus as to their scope and optimal design. In the UK, there is an initiative to investigate the ethical and regulatory questions surrounding AI, based primarily on the task force set up by the British government in the wake of the Bletchley Summit. The Group will focus on how developments in all three jurisdictions can usefully be compared and, to some extent, harmonised, especially in the light of the UK rejoining the Horizon programme and the European Innovation Council.
  • Current activities: Audit of the EU AI Act from the containment perspective.

More information

Members

Projects and funding

  • To be confirmed.