Guidance for Generative AI in Education and Research: Best Practices and Considerations


Generative AI is transforming education and research, but it requires careful guidance to ensure its benefits are harnessed effectively.

To avoid bias, it's essential to use diverse and representative datasets when training generative AI models. This means choosing data that reflects the diversity of the real world, with a focus on inclusivity and representation.

The quality of the data used to train generative AI models has a direct impact on their performance and reliability. Poor data quality can lead to biased or inaccurate results, which can have serious consequences in education and research settings.

To ensure the responsible use of generative AI in education and research, it's crucial to prioritize transparency and accountability. This includes providing clear information about the data used to train models, as well as the potential limitations and biases of the results.

Getting Started

Generative AI can be a powerful tool in education and research, but getting started can feel overwhelming. Begin by identifying your goals and objectives, whether that's improving student outcomes or accelerating research discoveries.



Generative AI can be applied to many areas, including language, image, and music generation. Research has shown that language models can generate educational content, such as personalized learning materials and adaptive assessments.

To begin, choose a generative AI model that aligns with your goals. For instance, transformer-based language models are well suited to text generation and can be leveraged for educational purposes.
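To make this concrete, here is a minimal sketch of how personalized learning material might be generated with a prompt template and a pluggable model backend. The `generate` function below is a stub, not a real model call; in practice you would replace it with a call to whatever model or API you have chosen. All function names here are illustrative assumptions, not a specific tool's API.

```python
# Sketch: a prompt-template wrapper for generating personalized
# learning materials. The generate() backend is a deliberate stub;
# swap in a real language-model call for actual use.

def build_prompt(topic: str, level: str, learning_goal: str) -> str:
    """Assemble a structured prompt for a language model."""
    return (
        f"Create a short study guide on {topic} "
        f"for a {level} student. "
        f"Learning goal: {learning_goal}. "
        "Include two practice questions with answers."
    )

def generate(prompt: str) -> str:
    """Stub backend; replace with a real model or API call."""
    return f"[model output for prompt: {prompt[:40]}...]"

if __name__ == "__main__":
    prompt = build_prompt("photosynthesis", "9th grade",
                          "explain the light reactions")
    print(generate(prompt))
```

Keeping the prompt construction separate from the model call makes it easy to swap models later as your goals or tools change.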

Generative AI can also be used to automate tasks, freeing up time for more critical thinking and analysis. By automating routine tasks, researchers can focus on higher-level tasks, such as designing experiments and interpreting results.

As you start exploring generative AI, keep in mind the importance of data quality and diversity: a well-designed dataset can significantly affect a model's performance and reliability.


Using Generative AI with Students


Using generative AI with students requires a thoughtful and nuanced approach. Have an open conversation with your students about what ChatGPT is and how generative AI is being used in education generally.

Consider giving some examples of how generative AI is being used in education to spark a discussion about its potential perils and possibilities. This can include discussing a report such as ChatGPT: Educational friend or foe?, which explores the debates around ChatGPT in the higher education context.

It's essential to foreground your permission to use generative AI in a conversation about citational practice and the ethics of transparency in research methods. This can help students understand the importance of acknowledging the use of AI tools in their academic work.

Depending on the learning objectives for your course, you might consider discussing a report or article that highlights the potential uses and misuses of generative AI in education. This can lead to a more focused discussion about academic production and academic honesty in your course.


Students' motivations for using generative AI can be diverse, ranging from stress about the writing and research process to experimentation and curiosity about using AI. Invite your students to have an honest discussion about these and related questions.

To cultivate an environment in your course where students feel comfortable approaching you for support, consider including a question about generative AI on your introductory course survey. You could also include a discussion of generative AI in your community agreements to highlight its impact on the classroom community.

By weighing these considerations, you can create a supportive and inclusive environment for your students to explore the potential of generative AI in education.

Academic Integrity and Ethics

Academic honesty is crucial for a positive and respectful learning environment. It's essential to define and discuss expectations with students to ensure everyone is on the same page.


Students generally express interest in understanding the purpose and process of learning, and discussing academic honesty can be motivating and demonstrate respect for their agency in the learning process.

To promote academic integrity, consider reviewing the College Honor Code with students and explaining your concerns about ChatGPT and similar generative AI tools. Be open and direct about your expectations for academic honesty, and consider adding a statement to your syllabus clarifying the extent to which generative AI may be used in your course.

Here are some key points to consider:

  • Academic misconduct, including the use of essay mills, plagiarism, and collusion, is strictly prohibited.
  • The use of GenAI tools to write entire assessments and present them as one's own work is considered plagiarism.
  • Using GenAI tools to check grammar or spelling is usually acceptable, but the context and nature of the assessment must be considered.
  • Staff resources on guiding students on the use of GenAI in assessments can be found on the Generative AI Hub.

Academic Integrity

Academic Integrity is a crucial aspect of higher education, and it's essential to discuss it openly with your students.

Academic misconduct is strictly prohibited, including the use of essay mills, homework help sites, plagiarism, collusion, falsification, impersonation, or any other action that might give a student an unfair advantage.

Students should understand that using generative AI tools to write entire assessments and presenting the output as their own work is not acceptable. This is treated as a form of plagiarism: presenting work one did not produce as one's own.



You should acknowledge GenAI use where it has been used to assist in the process of creating your work. The Library Skills pages share guidance on how and when to acknowledge GenAI in your work.

If you choose to incorporate generative AI in your course, consider foregrounding your permission to use generative AI in a conversation about citational practice and the ethics of transparency in research methods. Given that these products are new and conventions are still forming, you could use this opportunity to think collaboratively, critically, and creatively with your students about what scholarship and research will mean in the context of generative AI.

Detection software is not necessarily a reliable means of checking for generative AI use. For example, Turnitin's widely used AI-detection tool has acknowledged problems with bias and reliability; these limitations are worth discussing openly with students.


Be Aware of Privacy Issues

OpenAI's privacy policy states that ChatGPT account holders' personal information may be shared with third parties. This raises serious concerns for students, particularly those from marginalized backgrounds.

Numerous articles, such as "ChatGPT proves that AI still has a racism problem" and "The Internet's New Favorite AI Proposes Torturing Iranians and Surveilling Mosques", have documented how ChatGPT can replicate racist biases.

Instructors should be aware that this potential sharing of personal information may disproportionately affect students from marginalized backgrounds. OpenAI's settings for controlling chat-history sharing can offer some relief to those concerned about privacy.


Critical research, self-reflection, and consideration of course goals and expectations are necessary before including Generative AI in the classroom. This ensures that all students, especially those from marginalized backgrounds, are supported.

To better understand the potential risks, let's take a look at some key points to consider:

  • ChatGPT's potential sharing of personal information with third parties.
  • The replication and reinforcement of racial, gender, and other biases.
  • Generative AI's privacy and misinformation concerns.

Hallucination

Hallucination is a serious concern in academic integrity and ethics. It refers to instances where AI generates false or misleading information.

To mitigate this risk, verification protocols are crucial: cross-verify AI-generated data with trusted sources before use. It's essential to question the accuracy of AI outputs, even those from well-trained models.

Educating users on the signs of AI hallucination is vital. This empowers them to identify and question implausible outputs. Awareness and training can help prevent the spread of misinformation.

High-stakes scenarios require human oversight. Avoid relying solely on AI for decision-making in critical tasks where misinformation could lead to significant consequences. Iterative review processes that involve both AI outputs and human oversight ensure accuracy and reliability.

Here are some strategies to prevent AI hallucination:

  • Verification Protocol: Cross-verify AI-generated data with trusted sources.
  • Awareness and Training: Educate users on the signs of AI hallucination.
  • Limit Use for Critical Tasks: Avoid relying solely on AI for decision-making in high-stakes scenarios.
  • Iterative Review: Implement a multi-stage review process with human oversight.
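The verification-protocol step above can be sketched in a few lines: each AI-generated claim is checked against a set of trusted reference statements, and anything unverified is routed to human review. The exact-match rule used here is deliberately simplistic and purely illustrative; a real system would retrieve and compare against curated sources.

```python
# Sketch: split AI-generated claims into verified claims and claims
# that need human review, per the verification protocol above.
# Exact-membership matching stands in for real source retrieval.

def verify_claims(ai_claims, trusted_facts):
    """Return (verified, needs_review) lists of claims."""
    verified, needs_review = [], []
    for claim in ai_claims:
        if claim in trusted_facts:
            verified.append(claim)
        else:
            needs_review.append(claim)  # escalate to human oversight
    return verified, needs_review

if __name__ == "__main__":
    claims = ["Water boils at 100 C at sea level",
              "The moon is made of cheese"]
    facts = {"Water boils at 100 C at sea level"}
    ok, review = verify_claims(claims, facts)
    print("verified:", ok)
    print("needs human review:", review)
```

The key design point is that unverified output is never silently accepted; it is flagged for a human, which is exactly the iterative-review loop described above.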

Tools and Resources


Exploring faculty-facing resources on generative AI can be a great place to start.

The University of Delaware Center for Teaching & Assessment of Learning has developed "Discipline-specific Generative AI Teaching and Learning Resources".

Harvard University offers a wealth of resources through "Teach with Generative AI", including the "Harvard GenAI Library for Teaching and Learning".

The System Prompt Library is a valuable tool within "Teach with Generative AI", providing a range of effective prompts for educators.

These resources are particularly useful for learning about the technology and its implications for academic integrity, assignment design, student engagement, and the future of AI.


Guidelines and Policies

You should explicitly state how you want students to cite their use of generative AI technology in their assignments or assessments. The MLA and APA have published guidance on citing generative AI.

It's essential to articulate your requested citation format or style in writing so students understand what's expected of them. This will help prevent plagiarism and ensure academic integrity.


To address generative AI in your syllabus, you can include a statement outlining your approach or policy. This will help students understand what's allowed and what's not.

A syllabus statement can also help you reflect on your course design and clarify your pedagogical goals.

Instructors may allow students to engage with AI in specified ways or forbid its use altogether. The key is to be clear and transparent about your expectations.

You can use the following guidelines to create your own syllabus statement:

  • Allow students to use spell check, grammar check, and synonym identification tools.
  • Permit students to use app recommendations for rephrasing sentences or reorganizing paragraphs they've drafted themselves.
  • Require students to acknowledge the use of generative AI in their work.

Remember to provide a clear definition of what constitutes "inappropriate use" of generative AI and the consequences for violating this policy.

Evidence of inappropriate AI use may lead to an Academic Integrity report, with sanctions ranging from a zero on the assignment to an F for the course.

Be aware that other classes may have different policies, and some may forbid AI use altogether.

Incorporating generative AI into your course can be a valuable learning experience for students, helping them build resilience to automation and focus on skills that will remain relevant as automation advances.

Landon Fanetti

Writer

