Karin Schmidlin

From Data to Insights: Exploring Methods of Assessment in Online Project-Based Learning (PjBL)

This paper is the third of three papers for my comprehensive PhD exam, March 2024

Abstract

This paper examines assessment methodologies in project-based learning (PjBL) in post-secondary design education, with a specific focus on collaborative teamwork environments. It explores the vital role of universities in fostering key 21st-century capabilities, including critical thinking, problem-solving, and interdisciplinary collaboration. The paper traces the transition from traditional assessments to more comprehensive and inclusive approaches, emphasizing the significance of tailoring learning experiences to individual needs. It also considers the potential of advanced technological and data-driven methods, such as learning analytics and artificial intelligence, to enhance assessment in collaborative project-based learning. Additionally, I examine ethical considerations related to these methods.

Keywords: Project-based learning (PjBL), collaborative learning, learning analytics (LA), artificial intelligence in education (AiED), teamwork assessment, ethics

1. Introduction

2. Literature Review

3. Project-Based Learning – A Primer

4. PjBL and Assessment: Principles, Benefits and Challenges

5. Analytical and Data-Driven Approaches for Assessment

6. Ethical Considerations

7. Conclusions & Future Directions

References

1. Introduction

Effective teamwork abilities have become increasingly essential for students to learn because they reflect the interconnected and dynamic character of the professional environment (Brosens et al., 2023; Cortázar et al., 2022; Vartiainen et al., 2022). More importantly, many cognitive tasks are increasingly relegated to computer programs and algorithms, while people are expected to take on more complex tasks requiring critical thinking and collaboration with others (Levy & Murnane, 2004). Collaboration has the potential to integrate diverse perspectives, encouraging problem-solving through shared knowledge and enabling the blending of complementary skills. Within various professional domains, projects are often complex and require the integration of multiple areas of expertise.

Thus, students skilled at collaborating in multidisciplinary teams tend to be better prepared for their future careers (Wickliff, 1997; see also Rohm et al., 2021). This competency is particularly relevant in environments that value innovation and creativity, where collaborative efforts often lead to ground-breaking solutions and advancements. While many of the competencies required in the future are unknown, teamwork and collaboration have been noted as two of the most sought-after skills that employers look for in graduates (Britton et al., 2017; Cortázar et al., 2022; Planas-Lladó et al., 2021; Strom & Strom, 2011). Collaborative competencies have also been linked to overall student success, with studies showing a 17% greater achievement rate than individual work (Llanos Mosquera et al., 2021). By cultivating teamwork skills, universities enhance individual student competencies and build a flexible, collaborative workforce capable of addressing the complex challenges of the modern world.

To effectively teach collaborative competencies, PjBL has emerged as a vital method for encouraging teamwork (Jalinus et al., 2020). However, assessing students in collaborative PjBL environments is intrinsically challenging, especially when the measures span multiple domains and must work for a diverse group of students (Koh et al., 2018).

In online education, particularly in large courses, the assessment process plays a crucial role in the engagement between the instructor and the students, often serving as the sole point of interaction. This dialogue is not merely transactional; it establishes the basis of a learning environment defined by engagement, care, and growth. In a large online course, however, the sheer number of students can make it difficult for a single instructor to manage and uphold the standard of assessment. This is intensified in a PjBL setting, where the traditional transmission view of feedback and assessment (Nicol & Macfarlane-Dick, 2006) substantially increases the instructor's workload: the instructor must allocate time and effort to assess and offer relevant feedback on both individual contributions and the overall output of the team to meet the requirement for individualized input. Depending on the project topic, instructors must also guide students as they navigate complex tasks (Heo et al., 2010). This can be rather time-consuming, particularly when teams are at varying stages of comprehension.

Furthermore, consolidating the comments of several reviewers in a sizable class can create additional work for the instructor (Darvishi et al., 2022). Several online tools [1] are available to assist with peer assessment; however, these technologies require instructors to learn additional software that may not prioritize sound user interface principles. While the role of human insight in the assessment process cannot be overstated, as it cultivates the feeling of connection and personalization essential for successful learning, it is worthwhile to explore technological approaches, such as learning analytics and artificial intelligence (AI), to augment the instructor's ability to provide timely and meaningful feedback and formative assessment along the learning journey of individual students and student teams.

 

[1] Pear Assessment - https://www.peardeck.com/products/pear-assessment, iPeer - https://ipeer.ctlt.ubc.ca/

2. Literature Review

This literature review explores the key components of PjBL, including its core principles, traditional assessments, the integration of advanced technological tools for assessment, and the ethical concerns these tools raise. PjBL represents a fundamental shift from conventional lecture-based teaching to a model in which students take charge of their learning process (Parra Pennefather, 2022), focusing on critical thinking, problem-solving, and collaboration. Blumenfeld et al. (1991) and Kurt and Akoglu (2023) argue that PjBL prioritizes practical problem-solving and interdisciplinary teamwork, reflecting the complexity of the professional world. This pedagogy, as discussed by scholars such as Bell (2010) and Rohm et al. (2021), involves students participating in complex projects that require input from several fields of study, therefore fostering an integrated skill set.

The collaborative nature of project work requires a nuanced approach to assessment, as Chu et al. (2017) and Fiorini et al. (2022) emphasize. Traditional assessment techniques frequently face challenges in accurately capturing both the creative solutions produced by the teams and the process involved in the production. Peer assessment, although improving student accountability (Kao, 2013) and increasing the variety of input (Cho & MacArthur, 2011; Patchan & Schunn, 2015), may be flawed because of students' limited skills to assess their peers (Cevik, 2015; Gurbanov, 2016). Moreover, Rotsaert et al. (2018) point out that the interpersonal relationships between students can impact how they assess each other and how each student views the value of peer assessment overall.

The introduction of analytical and data-driven technologies, such as learning analytics (LA) and artificial intelligence (AI), holds great promise for enhancing assessment practices in PjBL. As Bulut et al. (2023) and Gašević et al. (2015, 2016, 2022) discuss, these technologies and tools foster a more customized learning experience and may offer a comprehensive understanding of student performance. In addition, multimodal learning analytics (MMLA) can add a new dimension to assessing collaboration in online settings by measuring various types of student interactions in controlled scenarios, as investigated by Järvelä et al. (2023) and Giannakos et al. (2022). However, integrating big data, learning analytics, and artificial intelligence in education presents significant ethical concerns. Issues of data privacy, student consent, algorithmic bias, and ongoing surveillance are urgent, as highlighted by Gomez et al. (2022) and Ruipérez-Valiente et al. (2022). Therefore, it is vital to develop ethical principles and theoretical guidelines to ensure the appropriate and fair use of AI and LA in education (Hwang et al., 2020; Wise & Shaffer, 2015).

Several case studies demonstrate the practical use and inherent complexity of technologically supported assessment (Conde et al., 2016; Khosravi et al., 2022; Knight et al., 2020; Martinez-Maldonado et al., 2019). The works discussed in this literature review emphasize the significance of maintaining a balance between technological progress and the essential human element, emphasizing the intricate nature and potential of integrating technology into the assessment process fairly and transparently.

3. Project-Based Learning – A Primer

Project-Based Learning (PjBL), considered an alternative to lecture-based and teacher-led instruction (C.-H. Chen & Yang, 2019), focuses on students engaging in complex real-world projects that can result in a final artifact, report, or team presentation. This pedagogy empowers students to assume responsibility for their learning and cultivate essential 21st-century skills such as critical thinking, problem-solving, and teamwork (Kurt & Akoglu, 2023). PjBL operates on the core principle that students are the main catalysts of their learning journey and not simply receivers of information. By actively participating in the process of inquiry, such as selecting a project topic that is personally relevant or exploring potential solutions to a problem (Frank et al., 2003), students develop a sense of ownership, autonomy, and a deeper connection with the subject matter. Blumenfeld et al. (1991) define PjBL as having two main components: a problem that can be solved and concrete deliverables resulting from the project.

The projects, central to PjBL, are focused on real-world problems that mirror the complexities and interdisciplinary nature of the professional world and society that the students will likely encounter after graduation (Bell, 2010; Hughes & Jones, 2011; Parra Pennefather, 2022; Rohm et al., 2021). These projects are not simply assignments but rather complex elements of work centred on real-life problems (Hadyaoui & Cheniti-Belcadhi, 2023). PjBL bridges the gap between theoretical knowledge and practical application and promotes a broader perspective in problem-solving, encouraging students to collaborate across topic areas and disciplines. Interdisciplinary collaboration is a cornerstone of PjBL (Hmelo-Silver et al., 2008; Jones et al., 1997), reflecting the interconnectedness and demands of the modern workforce and society. Teamwork skills and equipping students with the capabilities to work and engage in interpersonal relationships with people across diverse backgrounds are critical 21st-century skills (Ahonen & Kankaanranta, 2015; Bell, 2010; Kauppi et al., 2020). Meta-analyses of computer-supported collaborative learning (CSCL) have demonstrated that students who engage in online and face-to-face teamwork exhibit better academic performance than those who study independently and show higher scores on knowledge assessments and improved problem-solving abilities (J. Chen et al., 2018; Raes et al., 2015; Slof et al., 2021).

Moreover, PjBL emphasizes the learning process by placing great value on the student's learning journey, including inquiry, application, and reflection. Helle et al. (2006) categorize PjBL into two distinct aspects: vertical learning, which focuses on acquiring subject-matter knowledge, and horizontal learning, which encompasses cognitive abilities such as project management. Binkley et al. (2012) developed a similar framework in their study, dividing 21st-century skills into four groups: ways of thinking, such as problem-solving; ways of working, such as collaboration; tools for working, such as information literacy; and living in the world, which pertains to a student's understanding of cultural and global issues and their role as citizens. These two approaches to categorizing skills into distinct groups might provide valuable insights for creating holistic assessments in PjBL.

4. PjBL and Assessment: Principles, Benefits and Challenges

Current Assessment Practices in PjBL

Project-based pedagogy necessitates rethinking assessment methodologies, shifting from primarily assessing outcomes to focusing on formative assessment for the individual learner and the group, emphasizing the learning process. Teamwork, however, can be challenging to quantify since it consists of multiple interconnected behaviours and attitudes (Britton et al., 2017), and it is also difficult to define. For this paper, I define teamwork as individual students working in small groups toward a common objective by creating a prototype or artifact that aims to solve a real-world project challenge. This encompasses the process and individual students' contributions toward the collective effort (Hughes & Jones, 2011).

Assessing Teamwork

Despite the numerous benefits of student collaboration, assessment in PjBL proves challenging. The collaborative nature of the projects requires assessments to strike a balance between recognizing the teams' collective output and the individual student's contributions. Most research on teamwork has primarily concentrated on the group's performance, with individual assessments conducted separately. Therefore, assessing personal skills and competencies related to the group activity has been challenging. The work by Griffin and Care (2015) addresses this issue by showing how to quantify group efforts. Chu et al. (2017) discuss the challenges of acquiring relevant data to effectively identify the contributions of individual students in collaborative projects.

Additionally, cognitive skills such as creativity, critical thinking, and collaboration, which are integral to PjBL, are inherently challenging to measure as they do not lend themselves easily to traditional forms of assessment. This is further exacerbated in an online setting, where the instructor is not privy to the intricacies of the collaborative process and has little control over deteriorating team dynamics (Conrad & Openo, 2018). Therefore, innovative methods that can accurately assess these competencies are needed.

According to Conde et al. (2016), assessment in PjBL is often focused on the final artifact developed by the group, obscuring the learning process and making it challenging to evaluate each student's contribution and overall performance. Moreover, the tendency of some team members to contribute less effort to a project when working in a group than when working independently further complicates the issue. Researchers refer to this as "social loafing" (Lin et al., 2021, p. 583) or the free-rider syndrome (Haataja et al., 2022; Koh et al., 2018). Students have identified unequal contributions in teamwork as a significant factor that negatively impacted their learning experience (Lin et al., 2021; Wilson et al., 2018).

Wildman et al. (2021) report an increase in social loafing since face-to-face (F2F) classes moved to online delivery during the COVID-19 pandemic. To deal with the issue of social loafing, Roberts and McInnerney (2007) suggest two solutions that tie in with group assessment. The first requires the instructor to create specific assignments, such as a team contract, that define the roles and responsibilities of each team member to ensure accountability among students. The second involves using "peer pressure openly and unashamedly" (p. 261). While their research holds promise, using the term peer pressure in the context of education is problematic.

Over the years, I have given equal grades to all team members for the project portion of my project-based course [2]. Particularly in dysfunctional teams, this has consistently felt unfair, and I have struggled to encourage collaboration while simultaneously recognizing the contributions of individual students. The literature presents conflicting views about the suitable course of action in the case of team grading. Roberts and McInnerney (2007) argue that "assigning group grades without attempting to distinguish between individual members" (p. 264) is both unfair and detrimental to the learning process. In contrast, proponents of project-based learning, such as Webb (1995), highlight that the objective is to assess the collective performance of the group and not focus too much on individual contributions.

Measuring Team Competence with Digital Traces. In a case study from Spain, Conde et al. (2016) utilized the Comprehensive Training Model of the Teamwork Competence (CTMTC) tool to measure teamwork competence in a collaborative learning environment. The team analyzed the digital traces generated by individual students and groups as they engaged in project development. The CTMTC tool is structured around several stages adapted from the International Project Management Association (IPMA). To support these stages, the researchers collected data from several online platforms, including the course LMS, wikis, forums, and the social media app WhatsApp. Managing individual learning requires ongoing monitoring, which can be challenging in a large class. To collect all the necessary data, the researchers used LA tools and rubrics to access and analyze data about students' interactions with each other, their engagement in forum discussions, and both individual participation and group dynamics. While this case study is promising in some respects, the students indicated some teamwork-related challenges in a follow-up survey. In particular, the CTMTC tool did not address topics like communication and motivation. Approximately 44% of students reported difficulties with the communication tools, expressing a preference for instant messaging applications over LMS forums for documenting real-time conversations among team members. This mirrors my teaching experience, with students objecting to using the forum in the course LMS and often suggesting alternate options that align with their experience outside the university.
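
To make the idea of consolidating cross-platform traces concrete, the sketch below aggregates per-student contribution counts and flags students whose activity falls well below their team's average, the kind of signal an instructor might follow up on. The file name, column names, and threshold are hypothetical assumptions for illustration, not the CTMTC implementation used by Conde et al. (2016).

```python
# Hypothetical consolidation of cross-platform trace data for a teamwork rubric.
# File name, columns, and the 50% threshold are illustrative assumptions only.
import pandas as pd

# Expected columns: student_id, team_id, platform (forum/wiki/chat), timestamp
events = pd.read_csv("activity.csv", parse_dates=["timestamp"])

# Count each student's contributions per platform.
per_student = (
    events.groupby(["team_id", "student_id", "platform"])
    .size()
    .unstack(fill_value=0)          # one column per platform
)

# Compare each student's total activity with their team's average to surface
# possible unequal contributions for instructor follow-up.
per_student["total"] = per_student.sum(axis=1)
team_mean = per_student.groupby("team_id")["total"].transform("mean")
per_student["below_half_of_team_mean"] = per_student["total"] < 0.5 * team_mean

print(per_student.sort_values(["team_id", "total"]))
```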

Multitouch Tabletop for Concept Mapping. While the following example by Martinez-Maldonado et al. (2019) predominantly concerns co-located teams, some parallels can be drawn to an online setting. Participants were involved in a task that combined individual and collaborative concept mapping using a multitouch tabletop. Initially, each participant developed a concept map draft using a computer. Following this, teams of three collaborated to design a shared concept map. While tracking their touch inputs, the tabletop interface allowed the teams to incorporate their individual maps, add new components, and change the layout. Additionally, students were recorded with video cameras, providing extra data on how group members interacted and contributed to the shared concept map.

This collected data was then used to devise a system that can automatically analyze and make sense of various types of user interactions, including touch (via the multitouch tabletop), speech (communication between the team members), and visual cues (gestures). The study's goal was to recognize and understand the patterns that emerge during the collaboration process. For example, the system might show how often a student speaks, how students interact with each other, how students engage with the concept map on the tabletop, and how these types of interactions might provide a better understanding of the collaborative process. The validity of this study largely depends on how accurately the data collected reflects actual user behaviour and team interactions. There is a strong possibility that the awareness of being observed influences the behaviour of individual students and the group dynamics. The authors raised concerns about the issue of student surveillance, asking what can and should be recorded in educational settings.

Self- and Peer Assessment.

In most project-based courses, a main assessment component asks students to evaluate their learning journey and that of their peers (Alt & Raichel, 2022; Freeman & McKenzie, 2002; Kao, 2013; Phielix et al., 2010; Planas-Lladó et al., 2021) to achieve a more equitable measurement. While this provides students with a reflective practice that promotes deep learning, it also ensures that students are not solely fixated on the final product or artifact but are equally invested in the learning that occurs along the way.

In the context of collaborative project-based learning, Pellegrino (2013), in his seminal work on measuring what matters, highlights the significance of assessment as a tool that should foster learning and, therefore, be carefully designed and integrated into a course rather than being a separate or final stage of the learning process. He advocates for embedding assessment within the learning activities, enabling continuous feedback and adjustments in teaching and learning. Research by Darvishi et al. (2022) shows that including students in peer assessment has several benefits for both the assessors and the assessees. This is supported by findings from a meta-analysis on peer evaluations (Li et al., 2010) suggesting that students who provide feedback to their peers show improved understanding of the learning materials. Additional benefits include the development of a stronger sense of accountability among students (Kao, 2013), improved writing skills, and learning to provide constructive feedback (Lundstrom & Baker, 2009). Assessees, on the other hand, have the advantage of receiving prompt and personalized feedback from peers with diverse viewpoints (Cho & MacArthur, 2011; Patchan & Schunn, 2015), which is seen as less authoritative and more conducive to a mutual exchange of ideas and negotiation (Topping, 1998, 2003), a quality crucial to effective teamwork.

However, peer reviews for assessment purposes in PjBL can be inadequate, as students generally lack the necessary skills to evaluate their classmates appropriately (Cevik, 2015; Gurbanov, 2016), resulting in assessments that may not accurately represent individual students' contributions. In my experience with peer assessments in project-based courses, the outcomes have been mixed. Peer assessment can strengthen a team's cohesiveness, but in a dysfunctional team, peer feedback, particularly when given anonymously, can exacerbate feelings of distrust and erode psychological safety. These points are echoed by research by Harris and Brown (2013) and Vanderhoven et al. (2015), as described by Rotsaert et al. (2018).
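
Where peer ratings are used to differentiate individual grades from a shared team grade, one widely used idea (akin to WebPA-style weighting, and not the method of any study cited above) is to scale the team grade by each member's peer rating relative to the team average. A minimal sketch, with hypothetical names and an arbitrary cap on the adjustment:

```python
# Illustrative adjustment of a shared team grade using peer ratings.
# The weighting scheme and the cap are assumptions for demonstration only.
from statistics import mean

def individual_grades(team_grade, peer_ratings, cap=1.1):
    """peer_ratings: {student: [ratings received from teammates]}"""
    averages = {s: mean(r) for s, r in peer_ratings.items()}
    overall = mean(averages.values())
    grades = {}
    for student, avg in averages.items():
        weight = min(avg / overall, cap)   # cap prevents inflating grades too far
        grades[student] = round(min(team_grade * weight, 100), 1)
    return grades

ratings = {"Ana": [4.5, 4.0, 4.8], "Ben": [3.0, 2.8, 3.2], "Chen": [4.2, 4.4, 4.0]}
print(individual_grades(85, ratings))
```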

E-Portfolios

E-portfolios, another widely used method of assessment in PjBL (Frank et al., 2003; Gulbahar & Tinmaz, 2006), and in design education, function as a repository for individual learning experiences. Barrett (2007) defines e-portfolios as a collection of works and reflections demonstrating a student's growth over time. They allow learners to reflect on their experiences, document their skills, and curate a collection of their best work. According to Garthwait and Verrill (2003), the main goal of e-portfolios is "…to keep students focused on learning rather than on individual projects or products – e-portfolios are part of the learning process, not a result of it" (p. 23).

Assessment in collaborative design education raises further challenges beyond those of assessing collaboration already discussed. Design education encourages creativity in perception and expression by allowing, even encouraging, multiple solutions to a given problem or assignment (Arkun Kocadere & Ozgen, 2012; De Sausmarez, 1964). Design is thus an ill-defined discipline, which makes assessing students objectively, individually or in teams, challenging. The creative solutions in team projects are as diverse as the students themselves, which makes it difficult to keep both the assessment criteria and the assessing instructor objective (Fiorini et al., 2022). Instructors are therefore tasked with assessing not only the final artifact but also the individual student's learning process, a challenging undertaking that requires a nuanced and time-intensive approach.

In the following sections, I will explore the potential of data-driven assessment methods, including learning analytics (LA) and artificial intelligence (AI), to enhance our understanding of individual student and group performance.

[2] BET 350: Customer Experience Design. This undergraduate course is part of the Minor in Entrepreneurship and is available in the fall, winter, and spring semesters. The average number of students enrolled each semester is 200. University of Waterloo. https://uwaterloo.ca/conrad-school-entrepreneurship-business/bet-courses#BET350

5. Analytical and Data-Driven Approaches for Assessment

Technologies and tools that offer a certain degree of automation, such as artificial intelligence (AI) and learning analytics (LA), have the potential to significantly enhance the efficiency of the assessment process. Because they can process large student data sets, these technologies allow for highly personalized learning experiences (Ouyang & Jiao, 2021). Artificial intelligence tools can adapt to individual student needs, providing tailored feedback and scaffolding. This versatility benefits project-based learning, where students' learning paths and outcomes vary widely. However, automation in education is not a new idea; it goes back at least as far as Skinner's teaching machine (Watters, 2021) and has a history fraught with challenges and hubris. In the following sections, I will outline how data-driven technologies such as AI and LA can augment a teacher's effectiveness, what benefits such data-driven approaches offer, and what challenges must be addressed.

Content, Social Network and Multimodal Analysis

Content Analysis.

Content analysis is often used in Computer Supported Collaborative Learning (CSCL) environments to analyze group discussions (De Wever et al., 2006). By systematically assessing forum posts or discussion transcripts, instructors can gauge the extent of student comprehension and team dynamics. In a two-year qualitative study, Vogler et al. (2018) examined the learning process and perceived outcomes of students participating in an interdisciplinary project-based learning (PjBL) activity in which undergraduate students from three courses were assigned a project encompassing all three subjects. The students were assigned to teams in which they functioned as both customer and client. Content analysis of the students' reflective journals and focus-group interactions served as an effective means of assessing the learning experience from the students' point of view. The findings from the first year showed how students leveraged soft skills such as communication and collaboration. However, the study also revealed that the course design needed further adjustment to achieve the project's interdisciplinary objective.
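
For flavour only, the toy sketch below tallies indicator phrases for two invented categories across a reflective-journal excerpt; real content analysis relies on validated coding schemes and human coders rather than keyword matching, so everything here is an illustrative assumption.

```python
# Toy deductive coding pass: count occurrences of indicator phrases for two
# hypothetical categories. Real content analysis (De Wever et al., 2006) uses
# validated schemes and human coders, not keyword matching.
import re
from collections import Counter

CODING_SCHEME = {
    "collaboration": ["team", "together", "we agreed", "divided the work"],
    "communication": ["meeting", "messaged", "discussed", "feedback"],
}

def code_text(text):
    counts = Counter()
    for category, indicators in CODING_SCHEME.items():
        for phrase in indicators:
            counts[category] += len(re.findall(re.escape(phrase), text.lower()))
    return counts

journal = "We agreed on roles in our first meeting and discussed feedback as a team."
print(code_text(journal))   # Counter({'communication': 3, 'collaboration': 2})
```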

Social Network Analysis (SNA)

SNA provides insights into the social and collaborative aspects of student interactions and is one of the most popular methods for analyzing interactions in online learning (De Wever et al., 2006). In a study that looked at individual and collective student engagement, Ryu and Lombardi (2015) applied SNA to gauge the relationship between the two. Many researchers agree that social engagement is essential for genuine educational experiences during project work (Dewey, 1916; Hmelo-Silver et al., 2008). Although project work has great potential, obstacles arise when implementing this pedagogy in a typical face-to-face classroom environment. In an online setting, by contrast, students can connect with one another beyond temporal and spatial constraints, so project work may produce better outcomes there. Online project work also has the advantage of generating rich digital traces, offering a unique opportunity to analyze and understand the dynamics of team interactions (Järvelä et al., 2023).

In a study that combined content analysis with social network analysis, Heo et al. (2010) analyzed the interactions among team members in an online project-based undergraduate course. The goal was to use data collected in a discussion forum to gauge team cohesiveness and the overall quality of the project output. While the study had a small sample size (n=49), some inferences can be drawn from the interactions. Teams with the most links between members, meaning students commenting on and replying to team members' posts, showed better overall performance and team cohesiveness than teams dominated by isolated individual posts (in some cases, students commenting on their own posts). The study's findings noted that the quality of the interactions was far more important than their quantity in attaining successful team cohesion and overall performance.
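
A rough sketch of the kind of reply-network measures such a study might compute is shown below: it builds a directed who-replied-to-whom graph for one hypothetical team and reports density, reciprocity, and self-replies. The data are invented, and this is not the instrument used by Heo et al. (2010).

```python
# Hypothetical reply network for one team: an edge A -> B means A replied to a
# post by B. Density and reciprocity serve as simple cohesion proxies; self-loops
# correspond to students replying to their own posts.
import networkx as nx

replies = [
    ("Ana", "Ben"), ("Ben", "Ana"), ("Chen", "Ana"),
    ("Ana", "Chen"), ("Dev", "Dev"),        # Dev only comments on their own post
]

G = nx.DiGraph()
G.add_nodes_from(["Ana", "Ben", "Chen", "Dev"])
G.add_edges_from(replies)

print("density:", round(nx.density(G), 2))
print("reciprocity:", nx.reciprocity(G))
print("self-replies:", list(nx.selfloop_edges(G)))
```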

Multimodal Analysis (MMA)

MMA, meanwhile, recognizes the various ways in which students interact with the learning content and one another. It evaluates the different forms of communication used by students in PjBL, including text, video, audio, and interactive media. Team presentations are integral to many project-based courses and are part of the summative assessment. However, evaluating multimedia presentations and providing meaningful feedback is challenging and time-consuming in a large course. To address this problem, Ochoa (2022) focused on assessing team presentations by employing multimodal analysis to dissect the presentations and deliver automated feedback to the student teams. The system gathered data from multiple sources, including audio, video, and content from the slide decks, and employed machine learning algorithms to analyze the data and provide feedback on pre-determined aspects of the presentation, such as delivery, content, and teamwork. The study revealed that the system effectively delivered feedback similar to assessments made by human evaluators.

Moreover, the input provided by the system was crucial in enhancing students' presentation abilities. However, the study also found that further research was necessary to improve the accuracy and reliability of the system. This assessment approach holds promise for my teaching practice, since the substantial size of my class (roughly 36 teams) renders the process of assessment laborious and challenging.
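
To make the idea of fusing modality-level signals into automated feedback concrete, here is a deliberately simplified sketch. The modality scores, weights, thresholds, and messages are assumptions for illustration; Ochoa's (2022) system derives such scores from audio, video, and slide data with trained models.

```python
# Toy late fusion of per-modality scores (0 to 1) into an overall presentation
# score plus rule-based feedback messages. Weights and thresholds are invented.
MODALITY_WEIGHTS = {"delivery": 0.4, "content": 0.4, "teamwork": 0.2}

def presentation_feedback(scores):
    overall = sum(MODALITY_WEIGHTS[m] * scores[m] for m in MODALITY_WEIGHTS)
    messages = []
    if scores["delivery"] < 0.6:
        messages.append("Work on pacing and vocal variety.")
    if scores["teamwork"] < 0.6:
        messages.append("Balance speaking time more evenly across the team.")
    return round(overall, 2), messages

print(presentation_feedback({"delivery": 0.55, "content": 0.8, "teamwork": 0.7}))
```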

Learning Analytics

Learning analytics (LA) is an interdisciplinary field that has developed in parallel with the rise of digital technology in education. The primary goal of LA is to analyze large datasets collected from the digital traces the students leave behind in digital learning environments to provide insights for student support and learning design. The Society for Learning Analytics Research (SoLAR) [3] defines LA as "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" (Gašević et al., 2017, p. 64). LA plays a crucial role in collecting and interpreting the large amounts of data generated in online collaborative PjBL. It is the foundation for the previously described analyses by tracking interactions, performance metrics, and engagement levels across multiple platforms. The power of learning analytics lies in its ability to collect real-time data on student performance and convert it into practical insights, facilitating individualized feedback and timely interventions by the instructor. However, implementing LA and other data-driven technologies necessitates a certain degree of digital literacy that may be lacking among stakeholders, such as administrators and instructors (Buckingham Shum & Luckin, 2019). This calls for a careful balance between offering training and customizing tools to meet the users' needs. Hmelo-Silver et al. (2008) point to the importance of graphically displaying data in a way that is visually appealing and easy to grasp, allowing teachers and researchers to analyze complicated data sets and glean insights into the relationships within groups more effectively.
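
As a minimal illustration of turning trace data into a timely, actionable signal (hypothetical log format and threshold; not a production LA pipeline), the sketch below summarizes weekly LMS activity per student and flags weeks that might warrant an instructor check-in:

```python
# Minimal sketch: weekly engagement signal from hypothetical LMS event logs.
# Columns and the 40% threshold are assumptions for illustration only.
import pandas as pd

# Expected columns: student_id, timestamp, event_type
log = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])

weekly = (
    log.set_index("timestamp")
    .groupby("student_id")
    .resample("W")
    .size()
    .rename("events")
    .reset_index()
)

# Flag weeks in which a student's activity falls below 40% of their own average.
weekly["personal_mean"] = weekly.groupby("student_id")["events"].transform("mean")
weekly["check_in_suggested"] = weekly["events"] < 0.4 * weekly["personal_mean"]
print(weekly[weekly["check_in_suggested"]])
```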

Raković et al. (2023) explore the potential of LA to improve both formative and summative assessment in their discussion of future research opportunities. Their study suggests using trace data to enhance various learning activities, thereby broadening the potential of evaluation in educational environments. Additionally, they underscore the need for further research on the relationship between learning processes and outcomes, the reliability of assessments, and biases in LA models. Bulut et al. (2023) demonstrate the ability of online formative assessment data to predict student performance and emphasize the impact of learning analytics on conventional assessment approaches. Gašević et al. (2016) further highlight the significance of multimodal and trace-based techniques in LA, offering deeper insights into student learning patterns and performance across various learning platforms. Related to my inquiry on assessment for collaborative learning, the work by Koh et al. (2018) provides an exciting framework for team-based learning that highlights the importance of initial team interactions followed by stages of reflection, all facilitated with LA tools. These stages offer critical feedback, improving the learning experience and team dynamics.

In a study on successful team collaboration, Fidalgo-Blanco et al. (2015) applied an LA system to examine student interactions within a teamwork setting. By analyzing forum data and individual academic performance, the study reported a clear correlation between these interactions and the final grade assigned by the instructor. Such technologies not only have the potential to improve the assessment process but can also contribute substantially to the overall learning experience and team dynamics.

The work of Macfadyen and Dawson (2012) examines the constraints of relying solely on LA to inform strategic planning in higher education. The paper also highlights the ethical concerns regarding privacy and data collection, notably including data on student activity and demographics. According to Wise and Shaffer (2015), LA's usefulness for enhancing assessment depends on its theoretical foundations, emphasizing the importance of a sound theoretical framework to guide analytics and other technological advances in educational contexts. This lack of theoretical orientation in the design and implementation of data-driven LA is widely noted in the literature (Banihashem & Macfadyen, 2021; Gašević et al., 2015, 2017; Knight & Buckingham Shum, 2017; Reimann, 2016). To address this concern, Buckingham Shum and Luckin (2019) point to pedagogy as the "north star" (p. 2789) of technological integrations such as LA and artificial intelligence (AI). Bulut et al. (2023), in turn, argue that rather than building on the digital traces students leave inside the Learning Management System (e.g., learning activity logs, time spent on each page, engagement in online discussions, and performance in quizzes), predictive LA models should be based on data gathered through online formative assessment.
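
To illustrate that argument in the simplest possible terms (synthetic data, not Bulut et al.'s actual model), a predictive sketch might regress final course performance on formative quiz scores:

```python
# Toy predictive LA model in the spirit of Bulut et al. (2023): predict final
# performance from formative quiz scores. Data are synthetic; a real model would
# need validation, fairness checks, and far more careful feature design.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
quiz_scores = rng.uniform(40, 100, size=(200, 3))            # three formative quizzes
final = quiz_scores @ [0.3, 0.3, 0.4] + rng.normal(0, 5, 200)  # synthetic final grades

X_train, X_test, y_train, y_test = train_test_split(quiz_scores, final, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out students:", round(model.score(X_test, y_test), 2))
```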

Social Learning Analytics (SLA)

SLA is a branch of Learning Analytics (LA) that analyzes the social interactions and collaborative processes in online learning environments. It differentiates itself from the broader scope of LA, which primarily focuses on individual learning processes and predictive modelling. This focus on the social dimensions of learning is well-suited to the requirements of PjBL. Kaliisa et al. (2022) conducted a systematic review of 36 SLA-related studies published between 2011 and 2020, focusing on implementation and pedagogical perspectives. The findings indicate that SLA is primarily used in formal and exclusively online settings, with social network analysis (SNA) being the most common analytical method. Most studies concentrated on understanding students' learning processes from a social constructivist standpoint. However, there were certain deficiencies, including a lack of teacher involvement in developing SLA tools and an overall lack of integration of multiple analytical approaches.

Multimodal Learning Analytics (MMLA)

Recently, a subfield called multimodal learning analytics (MMLA) has emerged that moves beyond data collection from virtual learning environments (VLEs) to incorporate data from external sources such as sensors, cameras, and computer vision systems (Giannakos et al., 2022). Many universities have started using Learning Management Systems (LMS) to gather detailed learning data for instruction; however, as a lecturer at three Canadian universities, I consistently face the problem of limited access to the advanced learning analytics features in the LMS I use. Finally, the recommendation by Buckingham Shum et al. (2019) for a human-centred approach to learning analytics is relevant for my research and design practice, and it applies especially to project-based environments where individual and team contributions are diverse and dynamic.

Artificial Intelligence (AI)

Adopting Artificial Intelligence (AI) in educational assessment has created new opportunities, especially in the context of Project-Based Learning (PjBL). The shift from conventional assessment to AI-supported analysis represents a progression towards possibly more effective and comprehensive assessment procedures. Intelligent tutoring systems (ITS) have been the most common AI applications in higher education (Holmes et al., 2019), focusing on well-defined domains such as mathematics, computer science, and physics. These are often referred to as AI's low-hanging fruit, since knowledge in these domains is well-structured and, therefore, easier to automate (Holmes et al., 2019; Luckin et al., 2016).

However, integrating Artificial Intelligence (AI) in Project-Based Learning (PjBL) assessment is not without challenges. It is imperative to navigate this integration with foresight and discretion. The limitations of AI, including the nuances of creative interpretation and the potential for algorithmic bias, necessitate a balanced approach. AI can do precise content analysis, but this precision depends on the algorithms' architecture and the data provided to them. When evaluating student projects, for example, AI technologies like Turnitin's Gradescope (https://www.gradescope.com/) can successfully assess the accuracy of responses in structured assignments and provide instant feedback. In design education, however, where assignments are often more subjective and creative, AI's ability to identify and assess is restricted to what it has been programmed to recognize.

Examples of AI and Assessment

A study by Diziol et al. (2010) explored the integration of AI into collaborative learning settings; however, several technical challenges still need to be addressed before such systems can have widespread influence. In a study by Hadyaoui and Cheniti-Belcadhi (2023), AI techniques such as supervised machine learning were utilized to analyze group interactions in online forums and chat rooms within a collaborative project-based environment. The study aimed "to determine the impact of inter-group interactions on project assessment outcomes" (p. 25). Although the authors reported some positive results, it is essential to note that the findings are limited due to the small sample size (n=312), which makes broad generalizations challenging.

An example of AI applied to individual assessment is discussed by Khosravi et al. (2022) in a case study on how explainable AI (XAI) can be used to provide formative feedback on students' writing assignments. Although not explicitly related to team assessment, reflective writing is an integral component of most project-based courses, and in a large class, providing feedback on student reflection essays is time-consuming. XAI stands for explainable artificial intelligence and refers to a set of methods and techniques in AI and machine learning that aim to make decision-making processes transparent and understandable to humans (Conati et al., 2021). It allows users to understand the rationale behind the AI's feedback and to trust its recommendations. This is particularly important in domains such as education, where AI system transparency and accountability can substantially impact students' learning experiences and outcomes.
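
A minimal illustration of the underlying idea uses the per-feature contributions of a linear model as the "explanation" attached to a prediction; real XAI systems are far richer, and the features and data below are synthetic assumptions.

```python
# Toy explanation of one prediction: with a linear model, each feature's
# contribution is simply coefficient * feature value, which could be reported
# back to the student alongside the predicted score. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

features = ["forum_posts", "draft_revisions", "reflection_length"]
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 3))
y = X @ [1.5, 2.0, 0.5] + rng.normal(0, 1, 100)

model = LinearRegression().fit(X, y)
student = X[0]
contributions = dict(zip(features, model.coef_ * student))
print("predicted score:", round(model.predict([student])[0], 1))
print("why:", {k: round(v, 1) for k, v in contributions.items()})
```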

AcaWriter – Formative Feedback with AI.

AcaWriter (Knight et al., 2020) is an AI tool that provides formative feedback on student writing assignments. The tool's objective is to help students improve their academic writing skills, particularly in the context of scholarly or reflective writing. Writing analytics tools such as AcaWriter use sequence mining, machine learning, and rule-based natural language processing (NLP) to detect syntactic relationships between words and key terms or subjects in the writing. AcaWriter and similar learning analytics tools come with several drawbacks and challenges. Users, especially students, must have a certain degree of AI literacy to fully comprehend the feedback they receive, and some may struggle to understand its nuances. In addition, these tools often have a limited scope: they typically concentrate on a narrow set of characteristics connected to certain areas of writing, such as rhetorical structures in the case of AcaWriter, and may overlook other equally important aspects, such as grammar and punctuation, which can significantly influence the overall quality of the writing. Finally, AcaWriter and comparable technologies do not consistently deliver accurate feedback; external factors can occasionally interfere with the text processing, resulting in poor feedback.
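
To give a flavour of rule-based detection of rhetorical moves (and only a flavour: the patterns below are invented and nowhere near AcaWriter's actual NLP pipeline), a toy pattern-matcher for reflective-writing markers might look like this:

```python
# Toy rule-based detection of reflective "moves" in a sentence, loosely inspired
# by writing-analytics tools. The move names and regex patterns are invented for
# illustration and do not reflect the real system.
import re

MOVE_PATTERNS = {
    "challenge": r"\b(struggled|difficult|challenge[ds]?)\b",
    "change_of_view": r"\b(i (now )?realized|changed my mind|i learned)\b",
    "future_action": r"\b(next time|in future|i will)\b",
}

def detect_moves(sentence):
    s = sentence.lower()
    return [move for move, pattern in MOVE_PATTERNS.items() if re.search(pattern, s)]

text = ("I struggled with the user interviews, but I learned to prepare better "
        "questions, and next time I will pilot them first.")
print(detect_moves(text))   # ['challenge', 'change_of_view', 'future_action']
```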

With these limitations in mind, I cautiously approach the prospect of relying on such an automated tool in my courses. The course reflection paper that my students complete near the semester's end provides a rare opportunity for direct interaction with each of them. Despite the significant time commitment required for grading 200 essays, my feedback is a critical component of the course and establishes an essential connection, which many students have described as important. This underscores the irreplaceable value of the human touch in the technological tools we integrate into our courses.

[3] http://solaresearch.org

6. Ethical Considerations

With the proliferation of learning analytics (LA), educational data mining (EDM) and artificial intelligence in education (AiED), large amounts of data about students' learning behaviours, preferences, and overall performance are constantly being collected (Gomez et al., 2022), typically without the explicit consent of the students. This raises profound ethical concerns. Ruipérez-Valiente et al. (2022) note that analyzing trace data alone might yield limited inferences about learning, since recorded data do not provide a comprehensive picture of the entire learning process and often neglect contextual information. Trace data in an educational context refers to students' digital actions and interactions within a learning platform or environment; these could include timestamps of activities, navigation paths, forum posts, assignment submissions, library visits, or performance in quizzes. Collecting such data raises concerns about privacy violations, especially when sensitive information is involved. It is therefore crucial that students are fully aware of the data collected about them, the nature of its use, and their option to withdraw at any time without penalty.

Moreover, many of the technologies and platforms we use in Canadian universities are privately owned by for-profit companies with opaque data privacy policies. Without strict regulations and guidelines, there is a danger that student data might be misused for purposes other than intended, such as targeted advertising or profiling, resulting in discrimination or unfair treatment. Decisions, including assessments, based on poorly analyzed data or algorithms with inherent biases can adversely affect students' educational experiences and opportunities.

Lastly, the constant surveillance of students in technology-rich learning environments is hugely problematic. It also leaves out the rich contextual information that only human observation and interaction can provide. Life events inevitably shape a student's learning journey: during the fall 2023 semester alone, some of my students lost close family members, became sick, had their hearts broken, or dealt with severe mental health concerns. The data collected via the LMS did not reflect any of these events, which is why the role of a human teacher is so relevant, perhaps more than ever. As educational institutions increasingly rely on data-driven approaches, it is imperative to consider how this trend will shape educational practices and the student experience over time. Balancing the benefits of big data in enhancing educational outcomes against the ethical implications of its collection and use remains a complex and ongoing challenge.

Wise and Shaffer (2015) strongly argue for the need to incorporate a theoretical framework into data collection in an educational setting as the volume of data increases. This is essential, especially when utilizing learning analytics and AI. Selecting relevant variables and interpreting the findings can be compromised without a theoretical foundation.

Adding a theoretical framework guarantees that the process of gathering and examining data is purposeful and in line with educational objectives, rather than being driven solely by the capabilities of technology. This alignment helps generate more meaningful and contextually relevant data regarding student learning. In addition, a theoretical foundation ensures that the conclusions drawn are not just data-driven but also pedagogically sound and beneficial for educational outcomes. This approach leads to more responsible and ethical use of student data, safeguarding against biases and misinterpretations that could negatively impact students' academic experiences. Instead of the term 'data mining', Wise and Shaffer propose 'data geology' or 'data archaeology' as more appropriate metaphors for analyzing large masses of data in educational settings, because they include the "underlying conceptual relationships and the situational context" (p. 6).

7. Conclusions & Future Directions

This paper has examined the evolving assessment landscape in PjBL, specifically in online collaborative settings. This investigation aims to acknowledge the importance of cultivating teamwork skills and collaborative competencies in students while embracing the individual student's learning journey. The paper supports the shift from traditional lecture-based instruction to more student-centred, interactive, and project-based approaches.

However, assessing teamwork skills and competencies presents unique challenges, particularly in large online courses where the human element is invaluable but difficult to scale. By integrating advanced technologies such as learning analytics and artificial intelligence into the assessment processes in PjBL, a more customized learning experience may be possible, including how we assess individual students along their learning journey. However, integrating learning analytics and artificial intelligence in online courses also introduces ethical concerns, such as data privacy, student surveillance, algorithmic bias, and the potential for dehumanizing the learning experience. The paper underscores the need to design fair, transparent assessments that accurately measure individual contributions and group efforts.

Lastly, the personalization of assessment and feedback is where the potential of LA and AI is most evident and most relevant to my research purpose. Given the increasing size of my classes and the varying rates of progress and skill levels among students, there is a significant opportunity to implement AI-driven feedback mechanisms and leverage real-time data from LA systems (Bulut et al., 2023) to create a personalized learning experience on a large scale.

References

Alt, D., & Raichel, N. (2022). Problem-based learning, self- and peer assessment in higher education: Towards advancing lifelong learning skills. Research Papers in Education, 37(3), 370–394. https://doi.org/10.1080/02671522.2020.1849371

Arkun Kocadere, S., & Ozgen, D. (2012). Assessment of basic design course in terms of constructivist learning theory. Procedia - Social and Behavioral Sciences, 51, 115–119. https://doi.org/10.1016/j.sbspro.2012.08.128

Barrett, H. C. (2007). Researching electronic e-portfolios and learner engagement: The reflect initiative. Journal of Adolescent & Adult Literacy, 50(6), 436–449.

Bell, S. (2010). Project-based learning for the 21st century: Skills for the future. The Clearing House, 83(2), 39–43.

Blumenfeld, P. C., Soloway, E., Marx, R. W., Krajcik, J. S., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3/4), 369. https://doi.org/10.1080/00461520.1991.9653139

Britton, E., Simper, N., Leger, A., & Stephenson, J. (2017). Assessing teamwork in undergraduate education: A measurement tool to evaluate individual teamwork skills. Assessment & Evaluation in Higher Education, 42(3), 378–397. https://doi.org/10.1080/02602938.2015.1116497

Buckingham Shum, S., Ferguson, R., & Martinez-Maldonado, R. (2019). Human-centred learning analytics. Journal of Learning Analytics, 6(2), Article 2. https://doi.org/10.18608/jla.2019.62.1

Bulut, O., Gorgun, G., Yildirim-Erbasli, S. N., Wongvorachan, T., Daniels, L. M., Gao, Y., Lai, K. W., & Shin, J. (2023). Standing on the shoulders of giants: Online formative assessments as the foundation for predictive learning analytics models. British Journal of Educational Technology, 54(1), 19–39. https://doi.org/10.1111/bjet.13276

Cevik, Y. D. (2015). Assessor or assessee? Investigating the differential effects of online peer assessment roles in the development of students' problem-solving skills. Computers in Human Behavior, 52, 250–258. https://doi.org/10.1016/j.chb.2015.05.056

Cho, K., & MacArthur, C. (2011). Learning by reviewing. Journal of Educational Psychology, 103(1), 73–84. https://doi.org/10.1037/a0021950

Chu, S. K. W., Reynolds, R. B., Tavares, N. J., Notari, M., & Lee, C. W. Y. (2017). Assessment instruments for twenty-first century skills. In S. K. W. Chu, R. B. Reynolds, N. J. Tavares, M. Notari, & C. W. Y. Lee (Eds.), 21st century skills development through inquiry-based learning: From theory to practice (pp. 163–192). Springer. https://doi.org/10.1007/978-981-10-2481-8_8

Conati, C., Barral, O., Putnam, V., & Rieger, L. (2021). Toward personalized XAI: A case study in intelligent tutoring systems. Artificial Intelligence, 298, 103503. https://doi.org/10.1016/j.artint.2021.103503

Conde, M. Á., Hernández-García, Á., García-Peñalvo, F. J., Fidalgo-Blanco, Á., & Sein-Echaluce, M. (2016). Evaluation of the CTMTC methodology for assessment of teamwork competence development and acquisition in higher education. In P. Zaphiris & A. Ioannou (Eds.), Learning and Collaboration Technologies (pp. 201–212). Springer International Publishing. https://doi.org/10.1007/978-3-319-39483-1_19

Conrad, D., & Openo, J. (2018). Assessment strategies for online learning: Engagement and authenticity. Athabasca University Press. https://doi.org/10.15215/aupress/9781771992329.01

Cortázar, C., Nussbaum, M., Alario-Hoyos, C., Goñi, J., & Alvares, D. (2022). The impacts of scaffolding socially shared regulation on teamwork in an online project-based course. The Internet and Higher Education, 55, 100877. https://doi.org/10.1016/j.iheduc.2022.100877

Darvishi, A., Khosravi, H., Sadiq, S., & Gašević, D. (2022). Incorporating AI and learning analytics to build trustworthy peer assessment systems. British Journal of Educational Technology, 53(4), 844–875. https://doi.org/10.1111/bjet.13233

De Sausmarez, M. (1964). Basic design: The dynamics of visual form. Reinhold Pub. Corp.

De Wever, B., Schellens, T., Valcke, M., & Van Keer, H. (2006). Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers & Education, 46(1), 6–28. https://doi.org/10.1016/j.compedu.2005.04.005

Dewey, J. (1916). Democracy and education. Project Gutenberg; NetLibrary.

Fidalgo-Blanco, Á., Sein-Echaluce, M. L., García-Peñalvo, F. J., & Conde, M. Á. (2015). Using learning analytics to improve teamwork assessment. Computers in Human Behavior, 47, 149–156. https://doi.org/10.1016/j.chb.2014.11.050

Fiorini, V., Orr, S., Stopher, B., Tedeschi, A., & Tsui, C. (2022). Global directions: Unique approaches to design education. In Introduction to Design Education (pp. 93–114). Routledge.

Frank, M., Lavy, I., & Elata, D. (2003). Implementing the project-based learning approach in an academic engineering course. International Journal of Technology and Design Education, 13(3), 273–288. https://doi.org/10.1023/A:1026192113732

Garthwait, A., & Verrill, J. (2003). E-portfolios: Documenting student progress. Science and Children, 40(8), 22–27.

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Gašević, D., Dawson, S., & Siemens, G. (2015). Let's not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71. https://doi.org/10.1007/s11528-014-0822-x

Gašević, D., Kovanović, V., & Joksimović, S. (2017). Piecing the learning analytics puzzle: A consolidated model of a field of research and practice. Learning: Research and Practice, 3(1), 63–78. https://doi.org/10.1080/23735082.2017.1286142

Giannakos, M., Cukurova, M., & Papavlasopoulou, S. (2022). Sensor-based analytics in education: Lessons learned from research in multimodal learning analytics. In M. Giannakos, D. Spikol, D. Di Mitri, K. Sharma, X. Ochoa, & R. Hammad (Eds.), The multimodal learning analytics handbook (pp. 329–358). Springer International Publishing. https://doi.org/10.1007/978-3-031-08076-0_13

Gomez, M. J., Ruipérez-Valiente, J. A., & García Clemente, F. J. (2022). Analyzing trends and patterns across the educational technology communities using Fontana framework. IEEE Access, 10, 35336–35351. https://doi.org/10.1109/ACCESS.2022.3163253

Gulbahar, Y., & Tinmaz, H. (2006). Implementing project-based learning and e-portfolio assessment in an undergraduate course. Journal of Research on Technology in Education, 38(3), 309–327. https://doi.org/10.1080/15391523.2006.10782462

Gurbanov, E. (2016). The challenge of grading in self and peer-assessment (undergraduate students' and university teachers' perspectives). Journal of Education in Black Sea Region, 1(2), Article 2. https://doi.org/10.31578/jebs.v1i2.21

Hadyaoui, A., & Cheniti-Belcadhi, L. (2023). Exploring the effects of gender in skills acquisition in collaborative learning based on the ontological clustering model. Journal of Computer Assisted Learning, n/a(n/a). https://doi.org/10.1111/jcal.12852

Hadyaoui, A., & Cheniti-Belcadhi, L. (2023). Ontology-based group assessment analytics framework for performances prediction in project-based collaborative learning. Smart Learning Environments, 10(1), 43. https://doi.org/10.1186/s40561-023-00262-w

Heo, H., Lim, K. Y., & Kim, Y. (2010). Exploratory study on the patterns of online interaction and knowledge co-construction in project-based learning. Computers & Education, 55(3), 1383–1392. https://doi.org/10.1016/j.compedu.2010.06.012

Hmelo-Silver, C. E., Chernobilsky, E., & Jordan, R. (2008). Understanding collaborative learning processes in new learning environments. Instructional Science, 36(5/6), 409–430.

Hughes, R. L., & Jones, S. K. (2011). Developing and assessing college student teamwork skills. New Directions for Institutional Research, 2011(149), 53–64. https://doi.org/10.1002/ir.380

Hwang, G.-J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1, 100001.

Jalinus, N., Syahril, Nabawi, R. A., & Arbi, Y. (2020). How project-based learning and direct teaching models affect teamwork and welding skills among students. 11(11), 85.

Järvelä, S., Vuorenmaa, E., Çini, A., Malmberg, J., & Järvenoja, H. (2023). How learning process data can inform regulation in collaborative learning practice. In O. Viberg & Å. Grönlund (Eds.), Practicable Learning Analytics (pp. 115–132). Springer International Publishing. https://doi.org/10.1007/978-3-031-27646-0_7

Jones, B. F., Rasmussen, C. M., & Moffitt, M. C. (1997). Real-life problem solving: A collaborative approach to interdisciplinary learning (1st ed.). American Psychological Association.

Kaliisa, R., Rienties, B., Mørch, A. I., & Kluge, A. (2022). Social learning analytics in computer-supported collaborative learning environments: A systematic review of empirical studies. Computers and Education Open, 3, 100073. https://doi.org/10.1016/j.caeo.2022.100073

Kao, G. Y.-M. (2013). Enhancing the quality of peer review by reducing student "free riding": Peer assessment with positive interdependence. British Journal of Educational Technology, 44(1), 112–124. https://doi.org/10.1111/j.1467-8535.2011.01278.x

Khosravi, H., Shum, S. B., Chen, G., Conati, C., Tsai, Y.-S., Kay, J., Knight, S., Martinez-Maldonado, R., Sadiq, S., & Gašević, D. (2022). Explainable artificial intelligence in education. Computers and Education: Artificial Intelligence, 3, 100074.

Knight, S., & Buckingham Shum, S. (2017). Theory and learning analytics. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of learning analytics (pp. 17–22). Society for Learning Analytics Research. https://www.solaresearch.org/publications/hla-17/hla17-chapter1/

Knight, S., Shibani, A., Abel, S., Gibson, A., Ryan, P., Sutton, N., Wight, R., Lucas, C., Sándor, Á., Kitto, K., Liu, M., Mogarkar, R., & Shum, S. (2020). AcaWriter: A learning analytics tool for formative feedback on academic writing. Journal of Writing Research. https://doi.org/10.17239/jowr-2020.12.01.06

Koh, E., Hong, H., & Tan, J. P.-L. (2018). Formatively assessing teamwork in technology-enabled twenty-first century classrooms: Exploratory findings of a teamwork awareness programme in Singapore. Asia Pacific Journal of Education, 38(1), 129–144. https://doi.org/10.1080/02188791.2018.1423952

Kurt, G., & Akoglu, K. (2023). Project-based learning in science education: A comprehensive literature review. Interdisciplinary Journal of Environmental and Science Education, 19(3), e2311. https://doi.org/10.29333/ijese/13677

Levy, F., & Murnane, R. J. (2004). The new division of labor: How computers are creating the next job market. Princeton University Press.

Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525–536. https://doi.org/10.1111/j.1467-8535.2009.00968.x

Lin, J.-W., Tsai, C.-W., Hsu, C.-C., & Chang, L.-C. (2021). Peer assessment with group awareness tools and effects on project-based learning. Interactive Learning Environments, 29(4), 583–599. https://doi.org/10.1080/10494820.2019.1593198

Llanos Mosquera, J. M., Hidalgo Suarez, C. G., & Bucheli Guerrero, V. A. (2021). Una revisión sistemática sobre aula invertida y aprendizaje colaborativo apoyados en inteligencia artificial para el aprendizaje de programación. [A systematic review on flipped classroom and collaborative learning supported in artificial intelligence for programming learning]. Tecnura, 25(69), 196–214. https://doi.org/10.14483/22487638.16934

Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer's own writing. Journal of Second Language Writing, 18(1), 30–43. https://doi.org/10.1016/j.jslw.2008.06.002

Macfadyen, L. P., & Dawson, S. (2012). Numbers are not enough. Why e-learning analytics failed to inform an institutional strategic plan. Journal of Educational Technology & Society, 15(3), 149–163. https://www.proquest.com/scholarly-journals/numbers-are-not-enough-why-e-learning-analytics/docview/1287024911/se-2

Martinez-Maldonado, R., Kay, J., Buckingham Shum, S., & Yacef, K. (2019). Collocated collaboration analytics: Principles and dilemmas for mining multimodal interaction data. Human–Computer Interaction, 34(1), 1–50. https://doi.org/10.1080/07370024.2017.1338956

Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, 100020. https://doi.org/10.1016/j.caeai.2021.100020

Patchan, M. M., & Schunn, C. D. (2015). Understanding the benefits of providing peer feedback: How students respond to peers' texts of varying quality. Instructional Science, 43(5), 591–614.

Pellegrino, J. W. (2013). Measuring what matters: Technology and the design of assessments that support learning. In Handbook of Design in Educational Technology. Routledge.

Planas-Lladó, A., Feliu, L., Arbat, G., Pujol, J., Suñol, J. J., Castro, F., & Martí, C. (2021). An analysis of teamwork based on self and peer evaluation in higher education. Assessment & Evaluation in Higher Education, 46(2), 191–207. https://doi.org/10.1080/02602938.2020.1763254

Raković, M., Gašević, D., Hassan, S. U., Ruipérez Valiente, J. A., Aljohani, N., & Milligan, S. (2023). Learning analytics and assessment: Emerging research trends, promises and future opportunities. British Journal of Educational Technology, 54(1), 10–18. https://doi.org/10.1111/bjet.13301

Reimann, P. (2016). Connecting learning analytics with learning research: The role of design-based research. Learning: Research and Practice, 2(2), 130–142. https://doi.org/10.1080/23735082.2016.1210198

Roberts, T. S., & McInnerney, J. M. (2007). Seven problems of online group learning (and their solutions). Journal of Educational Technology & Society, 10(4), 257–268.

Rohm, A. J., Stefl, M., & Ward, N. (2021). Future proof and real-world ready: The role of live project-based learning in students' skill development. Journal of Marketing Education, 43(2), 204–215. https://doi.org/10.1177/02734753211001409

Ruipérez-Valiente, J. A., Martínez-Maldonado, R., Di Mitri, D., & Schneider, J. (2022). From sensor data to educational insights. Sensors, 22(21), 8556. https://doi.org/10.3390/s22218556

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/00346543068003249

Topping, K. (2003). Self and peer assessment in school and university: Reliability, validity and utility. In M. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 55–87). Springer Netherlands. https://doi.org/10.1007/0-306-48125-1_4

Vogler, J. S., Thompson, P., Davis, D. W., Mayfield, B. E., Finley, P. M., & Yasseri, D. (2018). The hard work of soft skills: Augmenting the project-based learning experience with interdisciplinary teamwork. Instructional Science, 46(3), 457–488.

Watters, A. (2021). B. F. Skinner builds a teaching machine. In Teaching machines (pp. 19–33). MIT Press. https://direct.mit.edu/books/book/5138/chapter/3395412/B-F-Skinner-Builds-a-Teaching-Machine

Webb, N. M. (1995). Group collaboration in assessment: Multiple objectives, processes, and outcomes. Educational Evaluation and Policy Analysis, 17(2), 239–261. https://doi.org/10.3102/01623737017002239

Wildman, J. L., Nguyen, D. M., Duong, N. S., & Warren, C. (2021). Student teamwork during COVID-19: Challenges, changes, and consequences. Small Group Research, 52(2), 119–134. https://doi.org/10.1177/1046496420985185

Wilson, K. J., Brickman, P., & Brame, C. J. (2018). Group work. CBE Life Sciences Education, 17(1), fe1. https://doi.org/10.1187/cbe.17-12-0258

Wise, A. F., & Shaffer, D. W. (2015). Why theory matters more than ever in the age of big data. Journal of Learning Analytics, 2(2), 5–13. https://doi.org/10.18608/jla.2015.22.2


Karin B. Schmidlin, PhD student

Department of Language and Literacy Education (LLED), Faculty of Education, University of British Columbia. Supervisors: Dr. Leah Macfadyen & Dr. Heather O'Brien. Committee Members: Dr. Jillianne Code & Dr. Patrick Parra Pennefather.