Recently, I was honoured to facilitate a Train the Trainers Workshop for a sizable audience of thirty-one (31) faculty members, including professors and lecturers in Software Engineering, Artificial Intelligence (AI), and Human-Computer Interaction at Jomo Kenyatta University of Agriculture and Technology, Kenya. The focus of my presentation was Empowering Responsible AI: Strategies for Evaluating AI Projects in Higher Education Institutions. In this article, I expand upon the discussion points presented in the workshop, delving deeper into the critical aspects of teaching and assessing responsible AI projects in higher education. The following themes will be discussed.
Now, let's get to the heart of our discussion and explore the essence of Responsible Computing and Responsible AI. For a deeper understanding of Responsible Computing, you can read my articles here (part i, part ii). You can learn more about AI's transformation of teaching roles and the skills educators now require in this article.
Defining Responsible Computing and Responsible Artificial Intelligence
Responsible Computing is the practice of designing, developing, and using technology in a manner that prioritizes ethical considerations, societal impact, sustainability, privacy, security, social justice, professionalism, and accountability. Furthermore, Responsible Computing underscores making responsible choices in the design, development, and use of technology to ensure positive outcomes and avoid harm to individuals and society.
As AI increasingly affects society, managing its ethical and societal risks necessitates responsible computing. This leads to the emergence of a specialized area known as responsible AI. It encompasses the practices of designing, developing, and deploying AI technologies in a manner that ensures fairness, transparency, accountability, social justice, privacy, security, interpretability, explainability, and sustainability.
Several frameworks guide responsible AI, with some particularly relevant to higher education projects. Examples include but are not limited to Microsoft's Responsible AI, Google's AI Principles, IBM's AI Ethics, OECD AI Principles, UNESCO AI Ethics, Mozilla's Trustworthy AI, Mozilla AI Theory of Change, The Aletheia Framework, WEF's AI Oversight Toolkit for Boards of Directors, Salesforce's AI Ethics Maturity Model, and the Artificial Intelligence Ethics Framework for the Intelligence Community, among others. Now, let's examine the ones most pertinent to higher education ICT projects.
Guiding Principles of Responsible Computing for Higher Education Projects
It is essential for educators in higher education ICT programs to be highly knowledgeable about the guidelines of responsible AI to proficiently guide and evaluate their students' projects. Largely based on Microsoft's Responsible AI, I will now discuss the following.
Fairness – AI should be equitable, unbiased, and serve similarly situated individuals equally.
To use AI responsibly, understanding its purpose and potential risks is essential, with a focus on inclusivity and harm prevention. Diversity in AI design teams is crucial to mirror global perspectives. Addressing bias requires thorough examination of data sources and structures, supported by tools like Responsible AI Tools for TensorFlow, Fairlearn and Responsible AI Dashboard. Transparent methods in machine learning are necessary, and prebuilt models, such as those from Azure OpenAI Service, must be critically analyzed. Human oversight and domain expertise should guide critical AI decision-making, ensuring AI acts as a supportive tool. Lastly, adopting best practices and analytical methods from leading institutions is vital for detecting and mitigating AI bias.
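To make the bias-detection point concrete, the sketch below computes group selection rates and the demographic parity difference, the same kind of metric that tools like Fairlearn report. This is a minimal, illustrative implementation on made-up loan-approval data, not a substitute for those libraries:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(predictions, groups).values()
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) with group labels
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))                # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap of 0.5 between groups would be a strong signal to re-examine the training data and model, which is exactly the kind of audit the tools above support at scale.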
Reliability and Safety – AI must function consistently, handle unexpected situations safely, and resist manipulation.
Developing and managing AI systems responsibly involves comprehensive strategies to ensure their safety, efficacy, and ethical use. This includes creating audit processes to assess data quality, model suitability, and efficient performance, while ensuring AI systems act as intended. Detailed documentation of system operations, like design specifications and training data insights, is important. Moreover, anticipating and designing for unintended interactions, including security breaches, enhances system robustness. Importantly, involving domain experts in design and development ensures AI's responsible use, particularly in significant decision-making processes. Rigorous testing in various scenarios is vital to anticipate and mitigate unforeseen system behaviors and failures. Tools such as Error Analysis are relevant to diagnose and recognize errors in AI applications. Moreover, AI systems should be designed to integrate meaningful human oversight. Finally, establishing a strong feedback loop with users is key to promptly addressing and rectifying system performance issues.
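The kind of cohort-level diagnosis that tools such as Error Analysis perform can be sketched in a few lines: compare a model's error rate across subgroups to surface reliability gaps that an aggregate accuracy figure hides. The data and cohort names below are hypothetical:

```python
def error_rates_by_cohort(y_true, y_pred, cohorts):
    """Error rate per cohort; large gaps between cohorts flag reliability problems."""
    stats = {}
    for true, pred, cohort in zip(y_true, y_pred, cohorts):
        n, errs = stats.get(cohort, (0, 0))
        stats[cohort] = (n + 1, errs + (true != pred))
    return {cohort: errs / n for cohort, (n, errs) in stats.items()}

# Hypothetical labels and predictions for an 8-sample evaluation set
y_true  = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred  = [1, 0, 0, 0, 0, 1, 1, 1]
cohorts = ["urban"] * 4 + ["rural"] * 4
print(error_rates_by_cohort(y_true, y_pred, cohorts))
# {'urban': 0.25, 'rural': 0.75}
```

A model that errs three times as often on one cohort is exactly the kind of unforeseen behavior rigorous scenario testing is meant to catch before deployment.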
Privacy and Security – AI should protect personal and business data, ensuring privacy and security in operations.
To maintain privacy and security in AI systems, it is essential to adhere to data protection, privacy, and transparency regulations. This can be achieved by investing in developing compliance technologies and processes as well as working with technology experts during the design and development of AI technologies. Moreover, AI systems should be designed to responsibly handle personal data, using it only when necessary and removing it when no longer needed. Security measures must be in place to protect against unauthorized access and cyber threats, with mechanisms for detecting and preventing malicious activities. Additionally, systems should provide users with control over their data usage and ensure anonymity by effectively anonymizing collected data. Regular privacy and security audits, alongside research into and adoption of industry best practices for data management and auditing, are essential to uphold these standards.
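Two of the practices above, data minimisation and anonymisation, can be illustrated with a small pseudonymisation sketch: direct identifiers are salted and hashed, and fields that are not needed are dropped rather than stored. The field names and salt are illustrative assumptions; a real deployment would use securely managed keys and a vetted anonymisation strategy:

```python
import hashlib

SALT = "institution-secret-salt"  # illustrative; store a real salt securely

def pseudonymize(record, id_fields, drop_fields):
    """Hash identifier fields and drop unneeded ones (data minimisation)."""
    out = {}
    for key, value in record.items():
        if key in drop_fields:
            continue  # don't retain data you don't need
        if key in id_fields:
            # Replace the raw identifier with a salted hash
            out[key] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

student = {"student_id": "S1234", "name": "Jane Doe", "grade": 87}
print(pseudonymize(student, id_fields={"student_id"}, drop_fields={"name"}))
```

Note that pseudonymisation alone does not guarantee anonymity; combined quasi-identifiers can still re-identify individuals, which is why the audits mentioned above remain necessary.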
Inclusiveness – AI must be accessible and beneficial to a diverse range of individuals, including those with disabilities.
To ensure accessibility and inclusiveness in technology, comply with legal requirements mandating the procurement of accessible technology. In addition, leverage resources like the Inclusive 101 Guidebook to identify and address potential barriers that could unintentionally exclude people. Testing systems with individuals who have disabilities is crucial to gauge their usability for a wide audience. Finally, adhering to recognized accessibility standards helps make systems usable for people of all abilities.
Transparency – It's important for people to understand the process by which AI systems arrive at decisions that impact human lives.
Transparency in AI, particularly its intelligibility, is crucial for understanding AI systems' behaviors and their underlying components. This understanding helps stakeholders identify issues like performance gaps, safety risks, privacy violations, biases, and unintended outcomes. It's important for users of AI systems to be transparent about their deployment methods. To enhance transparency, it’s necessary to disclose key dataset characteristics, ensuring they fit the intended use. Simplifying models and providing clear explanations of their operations, possibly through tools like the Responsible AI Dashboard, are also critical steps. Additionally, training employees to accurately interpret AI outputs and maintaining their accountability in decision-making processes are essential for responsible AI usage.
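Disclosing key dataset characteristics can start as simply as generating a small "datasheet" of facts a reviewer would want: record count, label balance, and missing values. The sketch below uses hypothetical student records and is only a starting point for fuller documentation:

```python
def dataset_datasheet(rows, label_field):
    """Summarise key dataset characteristics for transparent disclosure."""
    n = len(rows)
    label_counts = {}
    missing = 0
    for row in rows:
        label = row[label_field]
        label_counts[label] = label_counts.get(label, 0) + 1
        missing += sum(1 for value in row.values() if value is None)
    return {
        "num_records": n,
        "label_balance": {k: count / n for k, count in label_counts.items()},
        "missing_values": missing,
    }

# Hypothetical records with a couple of missing fields
rows = [
    {"age": 21, "score": 0.9, "label": "pass"},
    {"age": None, "score": 0.4, "label": "fail"},
    {"age": 25, "score": 0.7, "label": "pass"},
    {"age": 30, "score": None, "label": "pass"},
]
print(dataset_datasheet(rows, "label"))
# {'num_records': 4, 'label_balance': {'pass': 0.75, 'fail': 0.25}, 'missing_values': 2}
```

A skewed label balance or a high count of missing values disclosed up front lets stakeholders judge whether the dataset fits its intended use before the model is trusted.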
Accountability – Designers and operators of AI systems must be responsible and accountable for their impact and functioning.
To uphold accountability in AI systems, establish internal review boards for overseeing and guiding their responsible creation and use, including best practices for documentation and testing. Employees should be trained to responsibly use and manage these systems, with awareness of when technical support is necessary. Maintain expert human oversight in decision-making processes related to AI model execution, ensuring these experts can address and rectify any issues. Implement a clear accountability and governance structure to manage and correct any unfair or harmful behaviors exhibited by the models.
Society and environmental well-being - Consider social, societal, and environmental impact of AI systems. AI systems should be sustainable and environmentally friendly.
Bear in mind that training advanced AI models requires substantial computing resources, resulting in increased energy usage. This can potentially lead to a larger carbon footprint. Additionally, the need to store and process vast amounts of data contributes to the expansion of data centers, further contributing to negative environmental impacts. Sustainability can be ingrained in an educational institution's data governance policy by mandating the integration of environmental considerations in AI research and development, ensuring projects are assessed for their long-term ecological impact. Additionally, responsible AI policies should promote the use of AI to advance sustainable development goals, encouraging the creation and use of AI solutions that support renewable energy, resource conservation, and waste reduction.
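Students can make the carbon cost of training tangible with a back-of-envelope estimate: GPU-hours times power draw, scaled by data-centre overhead (PUE) and grid carbon intensity. All the default figures below are illustrative assumptions, not measured values; dedicated trackers give far more accurate numbers:

```python
def training_co2_kg(gpu_hours, gpu_power_watts=300, pue=1.5, kg_co2_per_kwh=0.4):
    """Rough CO2 estimate (kg) for a training run.

    Illustrative assumptions: ~300 W per GPU, a data-centre power usage
    effectiveness (PUE) of 1.5, and a grid intensity of 0.4 kg CO2 per kWh.
    """
    energy_kwh = gpu_hours * gpu_power_watts / 1000 * pue
    return energy_kwh * kg_co2_per_kwh

# e.g. a student project using 2 GPUs for 48 hours each = 96 GPU-hours
print(round(training_co2_kg(96), 2))  # 17.28
```

Even a rough figure like this, included in a project report, encourages the ecological-impact assessment that the policy above mandates.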
Now that you have understood the guiding principles of Responsible AI, let's delve into your role as an educator in guiding and evaluating Responsible AI projects in higher education.
Role as an Educator in guiding Responsible AI projects in higher education institutions
Data Governance – It is critical to ensure the ethical, fair, transparent, and accountable use of data in your institution. In so doing, you will need to understand how data is collected, processed, and used by your students and/or researchers.
Establish a clear data governance policy
Align with a data governance strategy/policy – If there is an existing data governance policy in place at the institution, then maintain the quality and integrity of data.
Scrutinize the training data for the model – ensure that it is representative, lawfully obtained, and of sufficient quality.
Establish internal and external advisory boards – Establishing internal (e.g., ethical committees) and external advisory boards is a strategic move to enhance the governance and oversight of Responsible AI projects. These boards introduce fresh perspectives and expertise, which are essential for nurturing innovation and ensuring that diverse viewpoints are considered in decision-making processes. They play a critical role in driving accountability, ensuring that projects adhere to high standards of transparency, performance, and ethical conduct. Moreover, these advisory boards help uphold the principles of responsible computing by guiding the institution in maintaining ethical practices, data integrity, and stakeholder trust. Finally, their involvement aids in preventing potential issues, and steering projects towards more sustainable and socially beneficial outcomes.
Now that you have understood your role as an educator, let's dive into the key considerations for developing a rubric for evaluating responsible AI projects - a foundational approach.
Developing a rubric for evaluating responsible AI projects - A foundational approach
When starting any research project, it is imperative to outline ethical considerations within the project plan. Additionally, students and researchers are expected to draft informed consent forms tailored to their specific studies. These preparatory steps, alongside the research plan, must then be reviewed and approved by an internal ethics committee, ensuring adherence to ethical standards. After completing their research, I recommend having students submit a comprehensive report, which serves as evidence of responsible practice. This report should address specific questions related to their research ethics and methodology, providing evidence and supporting documents in the appendices. Please see the details below, which can be used as a rubric to enhance student reflection and facilitate educator assessment.
Note well: This rubric, currently in its foundational stage, must undergo testing to validate its effectiveness and relevance in assessing the responsible use of AI systems in higher education institutions.
1. Ethical Problem Definition
2. Data ethics and privacy
3. Algorithmic Fairness
4. Accountability
5. Sustainability
6. Robustness and Safety
7. Transparency and communication
8. Impact assessment
Note well that tools such as Microsoft's Impact Assessment template and UNESCO's Ethical Impact Assessment tool can be applied in this phase.
9. Legal Compliance
10. Continuous Improvement
By addressing these questions, both educators and students/researchers can gain deeper insights into how the existing responsible AI frameworks and data governance policies were utilized to ensure the responsible use of AI systems for societal benefit. This foundational rubric potentially offers significant advantages for teachers, enhancing their insight into the application of responsible AI principles.
In summary, this article has expanded upon workshop themes to delve deeper into the nuances of teaching and evaluating responsible AI projects in higher education. Key areas explored include defining responsible computing and responsible AI, understanding guiding principles for AI projects in higher education, and delineating the educator's role in guiding these endeavors. Furthermore, developing a rubric for evaluating responsible AI projects emerges as a foundational approach, encapsulating the necessity of structured, ethical, and transparent evaluation methods. These discussions serve as a crucial step towards embedding a culture of responsibility in AI development and usage within academic institutions, ultimately shaping a future where technology aligns with ethical and societal values.
Written by:
Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at the university level including the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in The Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of ...