Empowering Responsible AI: Strategies for Evaluating AI Projects in Higher Education Institutions

Thu Apr 18 2024

Recently, I was honoured to facilitate a Train the Trainers workshop for a sizable audience of thirty-one (31) faculty members, including professors and lecturers in Software Engineering, Artificial Intelligence (AI), and Human-Computer Interaction, at Jomo Kenyatta University of Agriculture and Technology, Kenya. The focus of my presentation was Empowering Responsible AI: Strategies for Evaluating AI Projects in Higher Education Institutions. In this article, I expand upon the discussion points presented in the workshop, delving deeper into the critical aspects of teaching and assessing responsible AI projects in higher education. The following themes will be discussed:

  • Defining Responsible Computing and Responsible Artificial Intelligence.
  • Guiding principles of Responsible AI for Higher Education Projects.
  • Role as an educator in guiding Responsible AI projects in higher education institutions.
  • Developing a rubric for evaluating responsible AI projects - A foundational approach.

Now, let's get to the heart of our discussion and explore the essence of Responsible Computing and Responsible AI. For a deeper understanding of Responsible Computing, you can read my articles here (part i, part ii). You can learn more about AI's transformation of teaching roles and the skills educators require in this article.

Defining Responsible Computing and Responsible Artificial Intelligence

Responsible Computing is the practice of designing, developing, and using technology in a manner that prioritizes ethical considerations, societal impact, sustainability, privacy, security, social justice, professionalism, and accountability. Furthermore, Responsible Computing underscores making responsible choices in the design, development, and usage of technology to ensure positive outcomes and avoid harm to individuals and society.

As AI increasingly affects society, managing its ethical and societal risks necessitates responsible computing. This has led to the emergence of a specialized area known as responsible AI, which encompasses the practices of designing, developing, and deploying AI technologies in a manner that ensures fairness, transparency, accountability, social justice, privacy, security, interpretability, explainability, and sustainability.

Several frameworks guide responsible AI, with some particularly relevant to higher education projects. Examples include Microsoft's Responsible AI, Google's AI Principles, IBM's AI Ethics, the OECD AI Principles, UNESCO's AI Ethics, Mozilla's Trustworthy AI, Mozilla's AI Theory of Change, The Aletheia Framework, WEF's AI Oversight Toolkit for Boards of Directors, Salesforce's AI Ethics Maturity Model, and the Artificial Intelligence Ethics Framework for the Intelligence Community, among others. Now, let's examine some of the most pertinent ones for higher education ICT projects.

Guiding Principles of Responsible AI for Higher Education Projects

It is essential for educators in higher education ICT programs to be highly knowledgeable about the guiding principles of responsible AI in order to proficiently guide and evaluate their students' projects. Largely based on Microsoft's Responsible AI framework, I will now discuss the following principles.

Fairness – AI should be equitable, unbiased, and serve similarly situated individuals equally.

To use AI responsibly, understanding its purpose and potential risks is essential, with a focus on inclusivity and harm prevention. Diversity in AI design teams is crucial to mirror global perspectives. Addressing bias requires thorough examination of data sources and structures, supported by tools like Responsible AI Tools for TensorFlow, Fairlearn and Responsible AI Dashboard. Transparent methods in machine learning are necessary, and prebuilt models, such as those from Azure OpenAI Service, must be critically analyzed. Human oversight and domain expertise should guide critical AI decision-making, ensuring AI acts as a supportive tool. Lastly, adopting best practices and analytical methods from leading institutions is vital for detecting and mitigating AI bias.
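
To make bias detection concrete, here is a minimal sketch of a fairness audit with Fairlearn, one of the tools mentioned above. The synthetic data, the classifier, and the "group" attribute are illustrative assumptions, not part of any real project; in practice, students would substitute their own model, test set, and sensitive feature.

```python
# Minimal sketch: auditing a classifier for group fairness with Fairlearn.
# The synthetic data and the "group" attribute below are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
group = rng.choice(["A", "B"], size=500)   # hypothetical sensitive attribute
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Accuracy broken down per demographic group
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=y_pred,
                    sensitive_features=group)
print(frame.by_group)       # per-group accuracy
print(frame.difference())   # largest gap between groups

# Difference in selection rates between groups (0 means parity)
dpd = demographic_parity_difference(y, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.3f}")
```

A large gap in per-group accuracy or selection rates is a signal to revisit the data sources and model, as discussed above, rather than a verdict on its own.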

Reliability and Safety – AI must function consistently, handle unexpected situations safely, and resist manipulation.

Developing and managing AI systems responsibly involves comprehensive strategies to ensure their safety, efficacy, and ethical use. This includes creating audit processes to assess data quality, model suitability, and efficient performance, while ensuring AI systems act as intended. Detailed documentation of system operations, like design specifications and training data insights, is important. Moreover, anticipating and designing for unintended interactions, including security breaches, enhances system robustness. Importantly, involving domain experts in design and development ensures AI's responsible use, particularly in significant decision-making processes. Rigorous testing in various scenarios is vital to anticipate and mitigate unforeseen system behaviors and failures. Tools such as Error Analysis are useful for diagnosing and recognizing errors in AI applications. Moreover, AI systems should be designed to integrate meaningful human oversight. Finally, establishing a strong feedback loop with users is key to promptly addressing and rectifying system performance issues.
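
As an illustration of rigorous testing against unexpected inputs, the sketch below unit-tests a hypothetical `predict_risk` wrapper with malformed and out-of-range values. The function, its contract (scores in [0, 1], rejection of invalid input), and the test cases are assumptions for demonstration, not a prescribed API.

```python
# Sketch: stress-testing a hypothetical scoring function with unexpected
# inputs. `predict_risk` and its contract (scores in [0, 1], rejection of
# malformed input) are illustrative assumptions for a student project.
import math
import pytest

def predict_risk(features: dict) -> float:
    """Toy stand-in for a real model wrapper with input validation."""
    age = features.get("age")
    if not isinstance(age, (int, float)) or math.isnan(age) or not 0 <= age <= 120:
        raise ValueError("age out of supported range")
    return min(1.0, max(0.0, age / 120))

def test_score_stays_in_valid_range():
    assert 0.0 <= predict_risk({"age": 35}) <= 1.0

@pytest.mark.parametrize("bad_age", [-5, 999, float("nan"), None, "forty"])
def test_malformed_input_fails_safely(bad_age):
    # The system should refuse to guess rather than return a silent answer.
    with pytest.raises(ValueError):
        predict_risk({"age": bad_age})
```

The design choice here is failing loudly: refusing to score malformed input is safer than silently returning a plausible-looking number.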

Privacy and Security – AI should protect personal and business data, ensuring privacy and security in operations.

To maintain privacy and security in AI systems, it is essential to adhere to data protection, privacy, and transparency regulations. This can be achieved by investing in developing compliance technologies and processes as well as working with technology experts during the design and development of AI technologies. Moreover, AI systems should be designed to responsibly handle personal data, using it only when necessary and removing it when no longer needed. Security measures must be in place to protect against unauthorized access and cyber threats, with mechanisms for detecting and preventing malicious activities. Additionally, systems should provide users with control over their data usage and ensure anonymity by effectively anonymizing collected data. Regular privacy and security audits, alongside research into and adoption of industry best practices for data management and auditing, are essential to uphold these standards.
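
One practical way students can put "using it only when necessary" into practice is to pseudonymize direct identifiers before data is stored or shared. The sketch below uses a keyed hash; it is illustrative only, and a complete anonymization strategy must also consider quasi-identifiers and re-identification risk.

```python
# Sketch: pseudonymizing direct identifiers with a keyed hash before data
# is stored or shared. Illustrative only; full anonymization must also
# handle quasi-identifiers (age, postcode, etc.) and re-identification risk.
import hashlib
import hmac
import os

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical record from a student project
record = {"student_id": "JKU-2024-0042", "email": "jane@example.com", "score": 0.87}
safe_record = {
    "student_id": pseudonymize(record["student_id"]),
    "email": pseudonymize(record["email"]),
    "score": record["score"],  # non-identifying fields pass through
}
print(safe_record)
```

Because the hash is keyed, the same identifier always maps to the same token (allowing records to be linked) while the key itself stays out of the dataset.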

Inclusiveness – AI must be accessible and beneficial to a diverse range of individuals, including those with disabilities.

To ensure accessibility and inclusiveness in technology, comply with legal requirements that mandate the procurement of accessible technology. In addition, leverage resources like the Inclusive 101 Guidebook to identify and address potential barriers that could unintentionally exclude people. Testing systems with individuals who have disabilities is crucial to gauge their usability for a wide audience. Finally, adhering to recognized accessibility standards helps make systems usable for people of all abilities.

Transparency – It's important for people to understand the process by which AI systems arrive at decisions that impact human lives.

Transparency in AI, particularly its intelligibility, is crucial for understanding AI systems' behaviors and their underlying components. This understanding helps stakeholders identify issues like performance gaps, safety risks, privacy violations, biases, and unintended outcomes. It's important for those who deploy AI systems to be transparent about their deployment methods. To enhance transparency, it's necessary to disclose key dataset characteristics, ensuring they fit the intended use. Simplifying models and providing clear explanations of their operations, possibly through tools like the Responsible AI Dashboard, are also critical steps. Additionally, training employees to accurately interpret AI outputs and maintaining their accountability in decision-making processes are essential for responsible AI usage.
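
To make "disclose key dataset characteristics" actionable, students might keep a lightweight, machine-readable datasheet next to their training data. The schema below is a hypothetical example, loosely inspired by the "Datasheets for Datasets" idea; the field names should be adapted to your institution's policy.

```python
# Sketch: a lightweight, machine-readable datasheet stored next to the
# training data. Field names are illustrative; adapt the schema to your
# institution's data governance policy.
import json

datasheet = {
    "name": "student-advising-queries-v1",        # hypothetical dataset
    "collected": "2024-01 to 2024-03",
    "source": "opt-in survey of undergraduate volunteers",
    "consent": "written informed consent, approved by ethics committee",
    "intended_use": "training an FAQ-routing classifier",
    "known_gaps": ["part-time students under-represented"],
    "sensitive_fields_removed": ["name", "email", "student_id"],
}

with open("datasheet.json", "w") as f:
    json.dump(datasheet, f, indent=2)
```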

Accountability – Designers and operators of AI systems must be responsible and accountable for their impact and functioning.

To uphold accountability in AI systems, establish internal review boards for overseeing and guiding their responsible creation and use, including best practices for documentation and testing. Employees should be trained to responsibly use and manage these systems, with awareness of when technical support is necessary. Maintain expert human oversight in decision-making processes related to AI model execution, ensuring these experts can address and rectify any issues. Implement a clear accountability and governance structure to manage and correct any unfair or harmful behaviors exhibited by the models.
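
As a concrete aid to auditability, student projects can log every automated decision with enough context for a review board to reconstruct it later. The decorator, log fields, and the toy `decide` function below are illustrative assumptions, not a standard API.

```python
# Sketch: logging every automated decision so a review board can audit it
# later. The wrapped `decide` function and log fields are illustrative.
import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(filename="decisions.log", level=logging.INFO)

def audited(model_version: str):
    def decorator(fn):
        @wraps(fn)
        def wrapper(case_id, features):
            outcome = fn(case_id, features)
            audit_log.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model_version": model_version,
                "case_id": case_id,
                "outcome": outcome,
            }))
            return outcome
        return wrapper
    return decorator

@audited(model_version="admissions-screener-0.1")  # hypothetical model
def decide(case_id, features):
    # Low-confidence cases are routed to a human, per the oversight principle.
    return "refer_to_human" if features.get("score", 0) < 0.5 else "shortlist"

print(decide("APP-001", {"score": 0.42}))
```

Recording the model version alongside each outcome is what lets a governance structure trace an unfair decision back to the exact system that produced it.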

Societal and environmental well-being – Consider the social, societal, and environmental impact of AI systems. AI systems should be sustainable and environmentally friendly.

Bear in mind that training advanced AI models requires substantial computing resources, resulting in increased energy usage. This can potentially lead to a larger carbon footprint. Additionally, the need to store and process vast amounts of data contributes to the expansion of data centers, further compounding negative environmental impacts. Sustainability can be ingrained in an educational institution's data governance policy by mandating the integration of environmental considerations in AI research and development, ensuring projects are assessed for their long-term ecological impact. Additionally, responsible AI policies should promote the use of AI to advance sustainable development goals, encouraging the creation and use of AI solutions that support renewable energy, resource conservation, and waste reduction.
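
One way students might make this energy cost visible is the open-source CodeCarbon package, which estimates the emissions of a compute workload. In the sketch below, `train_model` is a placeholder for real training code, and the reported figure should be treated as an estimate, not a measurement.

```python
# Sketch: estimating the carbon footprint of a training run with the
# open-source CodeCarbon package. `train_model` is a placeholder for the
# student's own training code; the reported figure is an estimate.
from codecarbon import EmissionsTracker

def train_model():
    # Stand-in for a real training loop.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="responsible-ai-demo")
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```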

Now that you have understood the guiding principles of Responsible AI, let's delve into your role as an educator in guiding and evaluating Responsible AI projects in higher education.

Role as an Educator in Guiding Responsible AI Projects in Higher Education Institutions

Data Governance – It is critical to ensure the ethical, fair, transparent, and accountable use of data in your institution. In so doing, you will need to understand how data is collected, processed, and used by your students and/or researchers.

Establish a clear data governance policy

  • Policy clarity – Does your institution or research group have a well-defined data governance policy? Ensure the policy is comprehensive and easy to understand.
  • Data handling guidelines – Are the procedures for sourcing, using, storing, and accessing data explicitly outlined in the policy?
  • Informed consent – Do students and researchers obtain and use informed consents appropriately in their projects, ensuring ethical data collection and use?
  • Security and privacy – How is data protected from unauthorized access or breaches? Include measures for data security and privacy in the policy.
  • Sustainability – Do your ICT projects promote sustainable AI practices?
  • Compliance and regulations – Ensure the policy complies with local, national, and international data protection laws and regulations, e.g., the GDPR (EU); the Data Protection Act, 2019 (Kenya); the Personal Information Protection Act (South Korea); the Data Protection and Privacy Act, 2019 (Uganda); the Data Protection Act, 2012 (Act 843) (Ghana); and the Data Protection Act, 2020 and the Electronic Transactions Act, No. 15 of 2006 (Jamaica).
  • Ongoing education – Are there training programs in place to educate educators, students, and researchers about the importance of data governance and their roles in it?
  • Review and update – Is there a mechanism for regular review and updating of the data governance policy to adapt to new challenges, technologies, and legal requirements?

Align with a data governance strategy/policy – If there is an existing data governance policy in place at the institution, then align with it to maintain the quality and integrity of data.

  • Understand the existing policy – Familiarize yourself with the current data governance policy of the institution, focusing on its objectives, standards, and procedures.
  • Assess data needs and practices – Evaluate the specific data needs and practices of your department or research team to understand how they align with or diverge from the institution’s policy.
  • Identify gaps and areas for improvement – Identify any gaps between current data practices and the institutional policy. Determine areas where practices need to be improved or updated to comply with the policy.
  • Implement changes – Develop and execute a remediation plan, which may involve revising data management processes, implementing new data quality controls, or enhancing data security measures.
  • Monitor and evaluate compliance – Regularly monitor data management practices to ensure they comply with the policy and existing laws.
  • Review and update – Periodically review the alignment with the institutional data governance policy, especially when there are changes in institutional policies, data technologies, or regulatory laws.

Training data for the model 

  • Curating data sets – Lecturers can help gather and curate high-quality data sets for training AI models, ensuring the data is relevant, comprehensive, and representative of the real-world scenarios where the model will be applied (see the representativeness check sketched after this list).
  • Data training – Lecturers can educate students and researchers in data preparation and management, emphasizing the importance of transparency, quality, accuracy, and reliability in training data, thereby fostering responsible AI practices for positive social impact.
  • Ethical oversight – Teachers play a pivotal role in ensuring that the data used for training does not perpetuate biases or violate ethical norms. They can oversee the data collection and preparation processes to ensure the data adheres to ethical standards.
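
Building on the curation point above, the sketch below shows one way a curated training set might be screened for representativeness and basic quality issues with pandas. The CSV path, the "region" column, and the 10% threshold are illustrative assumptions to adapt per project.

```python
# Sketch: a quick representativeness check on a curated training set with
# pandas. The CSV path, "region" column, and threshold are illustrative.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical curated dataset

# Share of each demographic group in the data
shares = df["region"].value_counts(normalize=True)
print(shares)

# Flag groups that fall below a minimum share (threshold is a judgment call)
MIN_SHARE = 0.10
under_represented = shares[shares < MIN_SHARE]
if not under_represented.empty:
    print("Warning: under-represented groups:", list(under_represented.index))

# Basic quality checks: duplicates and missing values
print("Duplicate rows:", df.duplicated().sum())
print("Missing values per column:\n", df.isna().sum())
```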

Establish internal and external advisory boards – Establishing internal (e.g., ethics committees) and external advisory boards is a strategic move to enhance the governance and oversight of Responsible AI projects. These boards introduce fresh perspectives and expertise, which are essential for nurturing innovation and ensuring that diverse viewpoints are considered in decision-making processes. They play a critical role in driving accountability, ensuring that projects adhere to high standards of transparency, performance, and ethical conduct. Moreover, these advisory boards help uphold the principles of responsible computing by guiding the institution in maintaining ethical practices, data integrity, and stakeholder trust. Finally, their involvement aids in preventing potential issues and steering projects towards more sustainable and socially beneficial outcomes.

Now that you understand your role as an educator, let's dive into the key considerations for developing a rubric for evaluating responsible AI projects - a foundational approach.

Developing a rubric for evaluating responsible AI projects - A foundational approach

When starting any research project, it's imperative to outline ethical considerations within the project plan. Additionally, students and researchers are expected to draft informed consent forms tailored to their specific studies. These preparatory steps, alongside the research plan, must then be reviewed and approved by an internal ethics committee, ensuring adherence to ethical standards. After completing their research, I recommend having students submit a comprehensive report, which serves as evidence of compliance. This report should address specific questions related to their research ethics and methodology, providing evidence and supporting documents in the appendices. Please see the details below, which can be used as a rubric to enhance student reflection and facilitate educator assessment.

Note well: This rubric, currently in its foundational stage, must undergo testing to validate its effectiveness and relevance in assessing the responsible use of AI systems in higher education institutions.

1. Ethical Problem Definition

  • Relevance - Is the AI addressing a problem with significant ethical considerations?
  • Consent - Are there clear protocols for informed consent regarding data usage?

2. Data Ethics and Privacy

  • Data acquisition - Was the data collected in an ethical manner, ensuring privacy and consent?
  • Data anonymization - Were appropriate measures taken to anonymize sensitive data?
  • Data representativeness -  Does the data avoid bias and represent all relevant groups fairly?

3. Algorithmic Fairness

  • Bias Detection - Are there systematic efforts to detect and mitigate bias in algorithms?
  • Algorithmic Transparency - Is the algorithmic decision-making process explainable and transparent?
  • Fairness Evaluation - Are there robust methods to continually assess the fairness of the AI's outputs?

4. Accountability

  • Responsibility - Is it clear who is responsible for the AI's decisions and outputs?
  • Auditability - Is the system designed to be auditable by third parties?
  • Redress - Is there a mechanism for affected individuals to challenge or question the AI's decisions?

5. Sustainability

  • Resource efficiency - Is the AI designed to use computational and energy resources efficiently?

6. Robustness and Safety

  • Error handling - Does the AI have mechanisms to handle errors safely?
  • Safety protocols - Are there protocols to ensure the AI does not cause harm in case of malfunction?

7. Transparency and Communication

  • Documentation - Is there clear documentation on how the AI functions and how data is used?
  • Stakeholder communication - Are the project goals, processes, and outcomes communicated transparently to all stakeholders?

8. Impact Assessment

  • Societal impact - Was a thorough assessment of the AI’s societal impact conducted?
  • Risk assessment - What are the risks involved with the use of AI in your project?
  • Pros and cons of the AI - What are the benefits and disadvantages of using AI in your project?
  • Fairness considerations - Is your AI fair and free from biases? Is it inclusive?
  • Unsupported use cases - Which use cases are not supported by the AI?
  • Long-term effects - Were the potential long-term effects on society and the environment considered?

Note well that tools such as Microsoft's Impact Assessment template and UNESCO's Ethical Impact Assessment tool can be applicable in this phase.

9. Legal Compliance

  • Regulatory Adherence - Does the AI comply with applicable laws, regulations, and standards, e.g., the CCPA (USA), the GDPR (Europe), and the Data Protection Act, 2019 (Kenya)?
  • Ethical Standards - Does the project adhere to established ethical standards and guidelines in AI?

10. Continuous Improvement

  • Feedback Mechanisms - Are there processes for incorporating feedback and learning from the AI's performance over time?
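
For educators who want to aggregate results across many projects, the rubric above can be encoded as scored criteria. The sketch below is a hypothetical starting point: the 0-3 scale and equal weighting are assumptions that should be calibrated during the validation noted earlier.

```python
# Sketch: encoding the rubric as scored criteria so educators can aggregate
# results across projects. Categories mirror the rubric above; the 0-3
# scale and equal weighting are assumptions to calibrate during validation.
RUBRIC = {
    "Ethical problem definition": ["Relevance", "Consent"],
    "Data ethics and privacy": ["Data acquisition", "Data anonymization",
                                "Data representativeness"],
    "Algorithmic fairness": ["Bias detection", "Algorithmic transparency",
                             "Fairness evaluation"],
    "Accountability": ["Responsibility", "Auditability", "Redress"],
    "Sustainability": ["Resource efficiency"],
    "Robustness and safety": ["Error handling", "Safety protocols"],
    "Transparency and communication": ["Documentation",
                                       "Stakeholder communication"],
    "Impact assessment": ["Societal impact", "Risk assessment",
                          "Pros and cons", "Fairness considerations",
                          "Unsupported use cases", "Long-term effects"],
    "Legal compliance": ["Regulatory adherence", "Ethical standards"],
    "Continuous improvement": ["Feedback mechanisms"],
}

def score_project(scores: dict[str, dict[str, int]]) -> float:
    """Average all criterion scores (0 = absent .. 3 = exemplary)."""
    values = [scores[cat][crit] for cat, crits in RUBRIC.items()
              for crit in crits]
    return sum(values) / len(values)

# Example: a partially filled evaluation (all unlisted criteria scored 0)
example = {cat: {crit: 0 for crit in crits} for cat, crits in RUBRIC.items()}
example["Ethical problem definition"]["Consent"] = 3
print(f"Overall score: {score_project(example):.2f} / 3")
```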

By addressing these questions, both educators and students/researchers can gain deeper insights into how the existing responsible AI frameworks and data governance policies were utilized to ensure the responsible use of AI systems for societal benefit. This foundational rubric potentially offers significant advantages for teachers, enhancing their insight into the application of responsible AI principles.

In summary, this article has expanded upon workshop themes to delve deeper into the nuances of teaching and evaluating responsible AI projects in higher education. Key areas explored include defining responsible computing and responsible AI, understanding guiding principles for AI projects in higher education, and delineating the educator's role in guiding these endeavors. Furthermore, developing a rubric for evaluating responsible AI projects emerges as a foundational approach, encapsulating the necessity of structured, ethical, and transparent evaluation methods. These discussions serve as a crucial step towards embedding a culture of responsibility in AI development and usage within academic institutions, ultimately shaping a future where technology aligns with ethical and societal values.

Written by:

Kadian Davis-Owusu

Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at the university level including the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in The Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of ...