Talks and events

Event listings include those organised by BN and other bodies, both in Bristol and beyond. 

All forthcoming events

  • Complete listing of all upcoming events (details correct at time of entry; please check with the event organiser for any late changes)

If you wish to include an event in these listings please email

The Bristol Neuroscience Festival is a biennial event which engages the public on all things neuroscience. The next festival is planned for March 2018.

The Decision Making and Artificial Intelligence reading group is a joint project between the Philosophy Department and the Computer Science Department. It will take place on Wednesdays at 3pm in the Philosophy Department Library (in Cotham House).

The first meeting is on Wednesday 4th October at 3pm. Everyone is welcome to attend, and no prior knowledge of Decision Theory or AI is required! We hope that the group will be of interest to a wide range of researchers (e.g. computer scientists, philosophers, psychologists, sociologists), and are happy for this notice to be distributed to any interested parties.

The aim of the reading group is to explore some foundational, social, conceptual and ethical questions about AI and decision theory. Rather than restricting ourselves to a specific text, we will select various papers each week, exploring a variety of topics and questions. Here are some illustrative examples:

  1. How, if at all, can we make formal representations of moral principles so that they can be implemented into artificially intelligent machines?
  2. Do we have good reason to revise our understanding of certain ethical constraints, e.g. the right to privacy, in light of Big Data technology?
  3. Corporations are increasingly using artificially intelligent algorithms to make decisions which affect us. For example, AIs can be used by banks to decide whether or not to issue a mortgage to someone. Do individuals have a ‘right to explanation’ for the AI’s decision? And if so, what kind of explanation are they entitled to?
  4. If it can be shown that an intelligent agent can outperform a human (or human-in-the-loop alternative) when making decisions, are there any areas of society that should be protected from full automation?
  5. What does it mean for an intelligent agent to steer/nudge/control a human’s choice behaviour? How does this impact a user’s autonomy? How can the cognitive sciences (e.g. neuroeconomics, social psychology) contribute to these questions?
  6. How should we determine the boundaries of the agent that is responsible for making decisions, in cases of distributed information processing?
  7. What are the limits on the amount of behavioural data that should be collected and input into recommender systems, if doing so decreases search costs and increases utility for the user?
  8. A significant aspect of decision theory is in determining norms that can guide our choice behaviour towards more rational outcomes. To what extent can machine learning help uncover or refine these norms, and how could an intelligent agent support a human user in making more rational decisions?
  9. To what extent are intelligent agents immune from the cognitive biases that behavioural economists identify as violations of rational choice behaviour in humans?
  10. How could an intelligent agent contribute to a better understanding of some of the normative challenges of welfare economics (e.g. distribution of finite resources)?

For the first session, we are going to discuss Big Data and ‘Nudging’. We will use the following paper as a starting point: Yeung, Karen. "‘Hypernudge’: Big Data as a mode of regulation by design." Information, Communication & Society 20.1 (2017): 118-136. The paper is available at:

Any questions, please feel free to email Geoff and/or Chris on /
