As technology rapidly evolves, artificial intelligence (AI) is increasingly being applied in social care settings. To guide its ethical and responsible use, a small group of individuals supported by Digital Care Hub and the Oxford Institute for Ethics in Artificial Intelligence has developed a set of guiding principles.
These principles are designed to help social care commissioners and service providers navigate the opportunities and challenges that AI presents. They have been carefully developed to remain sensitive to the experiences of the people who use services.
There are seven core principles:
- Truth
- Transparency
- Equity
- Trust
- Accessibility
- Humanity
- Responsiveness
The project incorporates the FAIR human rights framework, which promotes structured ethical decision-making through four steps: Facts, Analysis of rights, Identification of responsibilities, and Review of actions. This model helps ensure that AI tools are not only effective but also aligned with the values and needs of individuals and communities.
The project will continue to develop over time. If you are interested in being part of the conversation, please contact hello@digitalcarehub.co.uk
Further Information
- Read more about the different ethical principles and what they entail.
- Find out about the Oxford Project: The Responsible use of Generative AI in Social Care.
- Read our Generative AI Myth Buster.