The Global Forum on Bioethics in Research (GFBR) will hold a two-day meeting in Cape Town, South Africa on 29 & 30 November 2022 on the theme: “Ethics of artificial intelligence in global health research”.
All applications should be sent to email@example.com by 17.00 CET on Friday 17 June 2022, in English.
This notice includes details on the following:
- ABOUT GFBR
- ABOUT THE THEME
- CALL FOR CASE STUDIES
- CALL FOR GOVERNANCE PAPERS
- KEY THEMES AND QUESTIONS
- CALL FOR PARTICIPANTS
- AWARDS: DECISION MAKING AND ELIGIBILITY FOR FUNDING
- CHECKLIST FOR APPLICANTS
If you have any questions about this call, please contact firstname.lastname@example.org.
1. ABOUT GFBR
GFBR seeks to bring together researchers, research policy makers and ethicists, among others, to share experiences and promote collaboration around research ethics. The Forum will be built around case study presentations to ensure that discussion of the ethical issues remains grounded in the practical realities of how research is conducted ‘on the ground’, particularly in low-resource settings. There will also be a session on governance issues. GFBR is unique compared to traditional meetings in that it is limited in size and built around small group discussions of case studies and governance papers submitted by participants. The Forum prioritises the participation of colleagues from low- and middle-income countries (LMICs), encourages networking and mentoring, and creates a venue for open and inclusive discussions.
GFBR is seeking three types of participants for the meeting:
- Case study presenters will present their research experiences and the ethical issues that have emerged regarding the use of AI in health research in LMIC settings (examples from high-income countries (HICs) will be considered if they show relevance to LMIC settings).
- Governance paper presenters will present on topics such as regulation, policies, guidance, tools and issues associated with ethics and other review and oversight mechanisms.
- Participants will attend the meeting and actively take part in plenary and small group discussions and networking opportunities.
Places are awarded on a competitive basis and successful applicants from LMICs will receive an award to cover the cost of their travel, accommodation and single-entry visa.
All interested applicants should review the information below and submit an application no later than 17 June 2022; case study and governance paper presenters will submit proposals with their applications. All applications will be reviewed by the GFBR Planning Committee and selection will be made against specific criteria (see below). Applicants are not limited to academic researchers; staff from government, non-governmental organizations (NGOs), and private sector organizations are also encouraged to apply if their applications are focused on the theme. If a case study or governance paper is multi-author and co-authors would like to attend, they must apply separately as participants and state the name of their co-author and the title of the proposal in their application.
Selected case study and governance paper presenters will be paired with a member of the GFBR Planning Committee to provide informal mentorship and help them develop their application into a brief paper (2-3 pages) and a PowerPoint presentation. During GFBR, case study and governance paper presenters from around the world will share their presentations and discuss cross-cutting issues, and then participants at the meeting will discuss the challenges and questions raised in both plenary and small group discussion.
This year’s meeting is co-organised with local host, the South African Medical Research Council.
Please note: In light of the COVID-19 pandemic, the meeting date and location will be kept under review. Provisional dates for the Forum are 29 & 30 November.
2. ABOUT THE THEME
Artificial intelligence (AI) is increasingly being used in global health research, but frameworks, policy and best practice for the ethical review and oversight of AI-enabled studies are currently lacking. The Forum will discuss how traditional research ethics regulatory frameworks have responded to the rapid advances in AI technology, and what changes are required, including to the role and responsibility of research ethics committees (RECs). It will explore ethical challenges such as bias, privacy, data provenance and ownership, along with the need for transparency, accountability and engagement during the design and use of AI in global health research. To date, these discussions have predominantly taken place in high-income countries, and low- and middle-income country (LMIC) perspectives have been underrepresented. The Forum will consider the LMIC context, where AI has the potential to address critical skills shortages and improve access to care, but where the ethical challenges are made harder by existing disparities in infrastructure, knowledge and capacity. The Forum will take a multidisciplinary approach to exploring how AI technology is being designed and used in health research, reflecting the range of actors involved in this space and the importance of ensuring that computer scientists and technologists who apply AI for health understand research ethics frameworks and considerations.
Please read the background paper for further details on the meeting theme and scope.
3. CALL FOR CASE STUDIES
The GFBR organisers are looking for interesting and important real-life cases that are relevant to the theme. The cases could demonstrate the development of good practice; highlight ethical challenges; demonstrate situations in which ethical practice failed; or present unresolved questions for the global community. The organisers encourage cases that address research ethics oversight and cases that address past GFBR topics*. However, the actual topics considered at the meeting will be defined by the case studies that are submitted. In this way, GFBR aims to be responsive to applicants and the issues that they consider most important.
(* For example, mental health research, genomics, research during epidemics, novel trial designs (e.g. adaptive trials). The full list of past topics is here.)
The scope includes case studies that:
- Focus on issues around conducting health research in LMICs. (However, we do not want to exclude case studies from high-income countries if there could be valuable lessons to learn, and some parallel or relevant ethical considerations. If your case study relates to a high-income country, please use the commentary section to draw out the relevance for research in LMICs.)
- Are from any stakeholder perspective, including ethicists, policy-makers, researchers, clinicians, computer scientists, and healthcare workers.
- Are from any organisational perspective, e.g. academic, technology companies, government, non-governmental organisations (NGOs) and public-private partnerships.
- Address the ethical issues associated with the lifecycle of developing, validating and using an AI system in the health research context. This could include:
  - Model development:
    - Collecting and processing data on which to train the AI algorithm
    - Designing and developing the algorithm
    - Training the algorithm (e.g. using a ‘test set’ and ‘tuning set’ of data)
  - Model validation:
    - Using an internal or external test set to validate the algorithm
  - Model use in health research. For example:
    - Assessing the impact of an “algorithm for intervention”, e.g. a prospective observational trial or an interventional clinical trial to evaluate an AI-based clinical tool in a clinical setting, which could include an assessment of how use of the algorithm may change the outcome for the patient, the behaviour of physicians or the patient/physician relationship.
    - Using a validated “algorithm for discovery” in research, e.g. to generate hypotheses or answer research questions such as discovering associations in population health data that reveal a new disease group or discovering potential drug candidates.
  - In addition, GFBR is open to case studies of research on AI to develop systems with health applications, e.g. projects that may be framed as data or computer science, developed by technology companies, and not necessarily characterised as ‘health research’ or subject to research governance requirements.
- Focus on the ethical issues that result from the use of the AI system in global health research. Case studies on health research that happens to use an AI system, but where the ethical issues relate to an aspect of the study not explicitly tied to the use of AI, are not in scope. For example, a case about the large scale and variety of health data required to train an AI system, which may increase privacy and security concerns, would be a stronger case than one that discusses data sharing issues common to other, non-AI types of research.
In general, case studies should focus on no more than three ethical issues. We indicate in Section 5 (‘Key themes and questions’) some examples of issues considered important by the GFBR organisers. Please see the background paper for more details on the scope and themes.
Case studies should be 2 pages maximum (excluding references), font Arial size 10.5 (in Microsoft Word or pdf format), clearly articulated in English and contain the following sections:
- Title of case
- Your name, institution and country location
- Brief description of the research project
- Background – relevant facts about the host country/community and disease studied (if disease specific)
- Ethical issues with commentary on each issue
- Conclusions and two recommendations for how to improve the ethics of, and ethical approaches to, using AI in health research. The recommendations can relate to the field broadly or be specific to the case study.
In addition, please provide the following information as a Microsoft Word or pdf document using font Arial, size 10.5. Please do not share sensitive personal health information.
- Short CV (2 pages maximum)
If you are unsure about the suitability of a possible case study or would like to discuss your proposal, please email email@example.com.
4. CALL FOR GOVERNANCE PAPERS
We are seeking papers that provide an overview and critique on the full range of governance issues. For example, the scope includes governance papers that:
- Focus on institutional, national, regional or international regulation, guidelines, policy, principles or codes of practice. The paper should speak specifically to the relevance and impact of these document(s) on AI in health research.
- Present issues and initiatives associated with research ethics review (e.g. research ethics frameworks and procedures, components of the review, role and skills of RECs) or technology review, privacy review, etc. that may take place in parallel to the REC review process.
- Discuss other governance bodies, mechanisms or tools (e.g. advisory councils or committees, impact assessments, data sharing or data use policies, research reporting standards).
The governance paper can be either:
- Practical (e.g. discuss gaps in national regulation or issues with research ethics review processes and propose a practical solution such as a new tool or mechanism) or
- Theoretical (e.g. draw on relevant theory about public-private partnerships in AI health research).
We indicate in Section 5 (‘Key themes and questions’) some examples of issues considered important by the GFBR organisers. Please see the background paper for more details on the scope and themes.
Proposals should be 2 pages maximum (excluding references), font Arial size 10.5 (in Microsoft Word or pdf format), clearly articulated in English and contain the following sections:
- Your name, institution and country location
- Brief description of the context e.g. what aspect of governance are you addressing – regulation, guidance, policy issue, tools, mechanisms, ethics or other type of review etc. Is it national/regional/international?
- Conclusion and recommendations.
In addition, please provide the following information, as a Microsoft Word or pdf document using font Arial, size 10.5. Please do not share sensitive personal health information in your application.
- Short CV (2 pages maximum)
If you are unsure about the suitability of your proposal and would like to discuss it, please email firstname.lastname@example.org.
5. KEY THEMES AND QUESTIONS
Case study and governance papers could address (but are not limited to) one or more of the following questions and should focus on research in LMICs, though examples from HICs will be considered if they show relevance to LMIC settings. Where the questions refer to a ‘researcher’, this includes the academic, commercial and government sectors, and includes those who develop AI systems and those who validate and use them in a health research context. Example case studies and governance papers can be found in this past GFBR meeting programme.
Some of these questions could be addressed as either a case study or a governance paper. You are welcome to decide which format is best to present your experience and ideas or you can contact email@example.com to discuss which format would be best. In general, a case study is appropriate if the ethical issues are associated with a specific research project and practical research experience. A governance paper would be more appropriate if you are providing an analysis of a governance issue and a theoretical solution.
Fairness and equity
- What processes, tools and checks are available to researchers to mitigate and identify data and algorithm bias? Who should be involved in assessing bias and issues of equity during the development and use of AI systems in health research?
- What challenges are faced by researchers in LMICs in developing and managing equitable international collaborations in the field of AI-based health research (with both public and private organisations)? What solutions have been proposed?
- How can ethics and the social sciences be embedded to inform the technical design and development of AI for health research and to mitigate potential unforeseen risks? Are there examples of best practices?
- What opportunities or initiatives are there for increasing the collective leverage of LMICs on data ownership to enhance greater access to training data for AI and to stimulate locally and globally driven AI health research (e.g. shared data platforms, algorithm registers)?
- How can inclusion be promoted during the development and use of AI for health research, and what could this look like (e.g. at the level of including a range of different stakeholders and/or different cultures and perspectives, and through training in the ethics of AI health research in LMICs)?
Trust and trustworthiness
- To what extent do current practices for using AI in health research – which were largely developed in HICs – resonate with the culture and values of stakeholders in LMICs (e.g. with respect to how personhood and privacy are conceived)?
- How can relevant values and perspectives in specific LMIC settings be identified and incorporated to foster and ensure ethical design of AI and the prioritisation of research that is most relevant to those settings?
- Are there unique features of AI health research that demand new approaches to consent, privacy and security (e.g. auditable e-consent or broad consent processes)? What new approaches have been used and what issues can weaken these approaches (e.g. power imbalances between data collectors and those who provide data)?
- How do design issues for e-consent for AI-enabled research impact the role of consent as a safeguard of autonomy? Are there examples of successful designs?
Transparency and engagement
- How does the use of AI in different research settings influence or change the way researchers should think about doing engagement and what practical approaches have been proposed or tested (e.g. basic research vs clinical research vs population research)?
- Who should be engaged during health research that uses an AI system (e.g. representatives from marginalized groups, local patient populations, communities more broadly etc.). When should they be engaged, how and for what purpose (e.g. for setting priority topics to explore, to inform the design of the algorithm and research, to help identify and mitigate unforeseen risks etc.)?
- What tools and criteria are being used to assess the impact of AI algorithms (e.g. on equity, privacy, human rights and safety)? Should assessment and certification take place during the research process, after deployment or some combination of both, and which stakeholders should be involved?
- To what extent does use of the tool (e.g. an algorithm impact assessment) exhaust a researcher’s ethical responsibility? If not, what else is required?
- What are the roles and responsibilities of stakeholders (e.g. researchers, funders, policy-makers, private industry, journals etc.) in facilitating ethical and equitable development and transparent reporting of AI-based health research? Are there examples of best practice?
- What are current practices in the ethical review of AI research, and how is its ethical conduct ensured within and across countries and settings?
- Specifically, what challenges have RECs faced when reviewing AI-based health research protocols and how were these challenges overcome (e.g. aspects of consent, risk/benefit assessment, privacy concerns, complexity of algorithms etc.)?
- How should traditional research ethics regulatory frameworks be adapted to respond to AI-based health research? (e.g. Should ethics review extend beyond the initial phase and also address other parts of the AI lifecycle? Is new guidance required to re-define the scope of REC review, extending it from the traditional protection of individual interest to also consider and balance societal benefits and risks?)
- How should traditional research ethics procedures be adapted to respond to AI-based health research? (e.g. Should algorithmic impact assessments or other reporting metrics be part of the ethics review process or parallel complementary reviews? In what ways can RECs acquire the necessary expertise to review AI health research – training, expert input etc?)
- What are the current governance structures and processes to support AI-based health research? Are they sufficient or are other governance mechanisms required?
- Which models do different countries use to govern AI-based health research (e.g. self-regulation, regulation, guidance)? Does this depend on the type of health research application?
- How can ethical principles be implemented in practice and what methods, processes, and frameworks can be used or are needed for AI researchers to better understand and operationalise ethics within their own research?
- What is required for market authorisation of a new AI-based application and to what extent does this align (or not) with research ethics review requirements?
6. CALL FOR PARTICIPANTS
Who can attend the GFBR meeting?
The majority of participants are selected through a competitive process*. Up to 60 participants will be selected from those eligible who apply by the deadline. We are seeking broad geographical representation, a mix of disciplinary expertise including researchers, clinicians, healthcare workers, bioethicists, policy-makers, health system functionaries, and lawyers, and a combination of people who are early in their careers and leaders in their fields.
Accurate journalistic reporting is essential to ensure that the public are engaged and well informed about the potential benefits and risks of research. For that reason, GFBR will support the participation of up to three journalists from LMICs. The meeting will provide a unique opportunity for talented journalists to network with international experts and forge stronger connections between scientists, ethicists, policy-makers and journalists. Funding support will be provided to LMIC-based journalists only.
To apply to attend as a participant or journalist, please provide the following information as a Microsoft Word or pdf document, in English, using font Arial, size 10.5. Please do not share sensitive personal health information.
- Complete participant application form
- Short CV (2 pages maximum)
Journalists: Please specify under ‘Your position/title’ that you are applying as a journalist and provide details on the form about:
- Your journalistic experience
- The ways in which you would disseminate the meeting outcomes in your local and regional context, including which media outlets you propose to use and the format of reporting.
(* GFBR also directly invites a number of participants e.g. expert speakers or representatives of key organisations.)
All applications should be sent to firstname.lastname@example.org by 17.00 CET on Friday 17 June 2022, in English. Please specify in the subject line whether you are applying as a participant, a case study presenter, a governance paper presenter or a journalist. Please ensure you include all the requested information, as incomplete applications cannot be considered (see the Checklist below). Applications received after the deadline will not be considered.
7. AWARDS: DECISION MAKING AND ELIGIBILITY FOR FUNDING
Successful applicants from LMICs who require full funding will receive an award to cover:
- return travel to the meeting (economy airfare and standard ground transportation costs);
- accommodation (2 or 3 nights maximum, including meals);
- a single entry visa (if required).
Participants will be expected to meet all other costs.
The GFBR Planning Committee will select successful candidates (both self-funded and those applying for funded places). The committee will consider the following factors when reviewing applications:
- Country of origin. We would like to ensure a representative distribution of participants from different regions;
- Background/current area of expertise. Applications will be selected for a diverse representation of many different disciplines, relating to the theme of the meeting;
- Experience of ethical issues related to the use of AI in health research;
- Reasons for attending the meeting. Preference will be given to participants who will be able to actively contribute to the meeting and who expect to achieve impact from it;
- Case study applications and governance papers only: Relevance of the case/paper to the meeting theme and research in LMICs;
- Journalists only: Demonstrated journalistic training and experience and concrete proposals for how the meeting findings will be disseminated, including which media outlets and the format of reporting.
If your case study or governance paper is not selected, your application will automatically be considered for a place as a participant. Applicants are therefore encouraged to submit a case study or governance paper.
All applicants will be informed of the Planning Committee’s decision by 12 August 2022. The decision of the committee will be final.
8. CHECKLIST FOR APPLICANTS
Please use the following checklist to make sure you have provided all the requested information in your application, in English.
Participants and journalists:
- Complete participant application form
- Short CV (2 pages maximum)

Case study presenters:
- Complete case study presenter application form
- Short CV (2 pages maximum)
- Case study proposal (2 pages maximum, excluding references)

Governance paper presenters:
- Complete governance paper presenter application form
- Short CV (2 pages maximum)
- Governance paper proposal (2 pages maximum, excluding references)
Planning Committee for GFBR 2022:
- Rachel Adams, African Observatory on Responsible Artificial Intelligence, South Africa
- Joe Ali, Johns Hopkins Berman Institute of Bioethics, USA
- Caesar Atuire, University of Ghana, Ghana (GFBR Steering Committee Member)
- Niresh Bhagwandin, South African Medical Research Council (local host representative)
- Phaik Yeong Cheah, Oxford University and MORU, Thailand (GFBR Steering Committee Member)
- Judy Gichoya, Emory University & DATA Scholar for DS-I Africa, USA
- Armando Guio, Berkman Klein Center for Internet & Society, Harvard University, USA
- Daudi Jjingo, College of Computing & College of Health Sciences, Makerere University, Uganda
- Katherine Littler, Global Health Ethics & Governance Unit, WHO, Switzerland (GFBR Steering Committee Member)
- Tamra Lysaght, National University of Singapore, Singapore
- Daniela Paolotti, ISI Foundation, Italy
- Jay Shaw, University of Toronto Joint Centre for Bioethics, Canada
- Effy Vayena, Health Ethics and Policy Lab, Institute of Translational Medicine, ETHZ, Switzerland
Support for GFBR 2022: Wellcome, the UK Medical Research Council (MRC), the National Institutes of Health and the South African MRC are providing funding for this meeting. The South African MRC is also providing logistical support as GFBR’s local host.
16 May 2022