Remote, Simulation or Traditional Engineering Teaching Laboratory: A Systematic Literature Review of Assessment Implementations to Measure Student Achievement or Learning

Sasha Nikolic (University of Wollongong), Montserrat Ros (University of Wollongong), Kosta Jovanovic (University of Belgrade), Zarko Stanisavljevic (University of Belgrade)

I. ABSTRACT

The laboratory is an integral component of engineering education, resulting in hundreds of studies exploring how to design laboratories of best fit and best practice. In many cases the research focus is on the laboratory innovation, rather than the learning itself. It is observed that empirical evidence is strongly built around student perceptions of their learning or experience via survey instruments, in some cases complemented with quantitative measures such as pre and post-tests (learning) and marks or grades (achievement) that focus on cognitive learning. With the laboratory being a multifaceted, multi-domain learning environment also covering the psychomotor and affective domains, such observations suggest that the empirical data being collected provides an incomplete analysis. Therefore, this paper explores the engineering education literature with a focus on laboratory studies that explicitly include assessment analysis, to determine any gaps in research approaches to date. This will provide an opportunity for laboratory-based research to realign and cover any discovered gaps in relation to learning. Unlike other similar studies, this paper determines findings based on the outcomes of real, non-survey-based assessments alone.
To develop this understanding, four research questions are explored:
1. What assessments are used to measure the learning benefits of new laboratory implementations?
2. What assessments are used to measure the learning benefits when comparing laboratory modes?
3. What are the new laboratory assessment innovations?
4. Is it currently possible to holistically assess all laboratory learning objectives across the cognitive, psychomotor, and affective domains?
Following a formal protocol, a database search was conducted covering the timeframe 2000 to 2020. Thirty-two articles were identified. The study 1) confirms that assessments concentrate on the cognitive domain, underselling the learning being achieved; 2) shows that student survey instruments play an important role in measuring laboratory success; 3) finds that background information on the learning objectives is not clearly stated and/or not clearly linked to the associated assessment; and 4) identifies several research opportunities available to improve understanding of laboratory assessments. A roadmap and recommendations to overcome these weaknesses are outlined, providing a platform for future researchers to incorporate in their studies.

Index Terms—Bloom's Taxonomy, Laboratory Learning Assessment, Learning Objectives, Systematic Review

II. INTRODUCTION

There are multiple laboratory modes of teaching, including (with basic descriptions): remote (using real laboratory equipment from a distance using online technologies) (Gustavsson et al., 2009); simulated, also associated with the term virtual (programs that replicate/simulate real equipment) (Balakrishnan and Woods, 2013); and traditional, also associated with the term face-to-face (using real equipment in a hands-on laboratory) (Feisel and Rosa, 2005). Regardless of the mode, researchers have consistently stated that the learning experience provides an important contribution to engineering education. Furthermore, mixing and matching laboratory modes may help advance learning further than one mode can alone (Ma and Nickerson, 2006). Understanding the strengths and weaknesses of each mode is important because different modes can aid or weaken the development of various learning outcomes (Lindsay and Good, 2005). Therefore, it is important to match the mode/s with the desired learning outcome intended. It is equally important to understand the impact on all forms of learning occurring in any laboratory implementation, intentional or not. It is well acknowledged that the laboratory is capable of enhancing student learning across thirteen laboratory objectives (Feisel et al., 2002) that include instrumentation, models, experimentation, data analysis, design, learning from failure, creativity, psychomotor, safety, communication, teamwork, ethics and sensory awareness. These objectives showcase the multifaceted benefits and diverse learning experiences covering the cognitive, psychomotor and affective learning domains that can be incorporated and assessed through experimentation (Nikolic et al., 2021). Unfortunately, for the few studies that try to measure student achievement or learning, assessment is primarily targeted at the cognitive domain (Brinson, 2015, Post et al., 2019). The focus is generally concentrated on conceptual knowledge and understanding. Psychomotor or affective learning is generally not considered, or if it is, tends to be evaluated via survey instruments for research purposes.
This cognitive concentration tends to suggest a learning outcomes focus predetermined by the teacher/course syllabus without appreciating the wider student learning process (Hassan, 2011). Of interest is why greater effort has not been placed on assessing competencies beyond the cognitive. For example, with engineers increasingly needing to engage in work that is both technical and social in nature (Guzzomi et al., 2017), would the laboratory not be a good place to nurture such competencies and assess them? Arguably, increasing self-confidence in high-risk environments, such as with high-voltage techniques, through hands-on activities could be seen in industry as equally important to an engineering career as cognitive learning (Memik and Nikolic, 2021). This would require assessments that go beyond the cognitive domain. With greater government focus on work integrated learning, career and industry readiness (Nikolic et al., 2016, Australian Government Department of Education and Training, 2017), which involves learning across multiple domains, could and should the laboratory play a larger role? How do we encourage laboratory objective development and assessment beyond the cognitive domain?

Teaching laboratory investigations that attempt to discover educational benefits (beyond survey instruments) do so by either comparing changes in student achievement (for example, comparing grades or scores) or, less frequently, comparing changes in learning (for example, using a pre and post-test). These approaches can help researchers who are developing new laboratory implementations or comparing laboratory modes. The educational benefits that can be extracted from such investigations are limited by the assessment structures used. Selecting assessment methods that target the desired learning objectives is important. This is because assessments encourage students to focus on those objectives being assessed at the expense of those which are not, and they influence the students' learning approach (surface or deep) (Boud, 1995, Guzzomi et al., 2017). Therefore, without careful consideration, assessment choices may undermine potentially holistic learning experiences and encourage poor learning choices. This highlights how teaching, learning and assessment are inextricably linked (Hargreaves, 1997). Therefore, it is of great interest to understand how laboratory assessment has progressed and what learning objectives are being targeted.

A review of competency-based learning assessment implementations in engineering was undertaken by Henri et al. (2017). They discovered a diverse range of assessment tools and approaches that include: online assessment; surveys and 360° assessments focused on self and peer assessment; portfolios; group writing, assignments and presentations; various formative assessment approaches; practical and hands-on assessment; tests, quizzes, assignments and projects; and big-picture assessments. The variety of assessments outlined provides a wealth of options to evaluate student competencies across a long list of objectives and the three learning domains. It is of great interest to see if the evolution of assessment options used in engineering has transitioned to the laboratory, and also to help evaluate the educational benefits of various laboratory implementations. It is also of interest to explore if there are any laboratory-specific assessment innovations being developed. The authors are not the first to raise the many questions outlined throughout the introduction.
There is already awareness of the lack of attention paid to the assessment of instructional objectives of engineering laboratories and the limitations put in place by conventional assessments (Loui, 2016). In previous work the authors (Nikolic et al., 2021) developed an instrument to measure perceived learning that holistically considered all laboratory learning objectives across the cognitive, psychomotor and affective domains. The basis of this investigation is to determine a baseline that can be used by the authors to move to the next scaffold of their work: to holistically measure real learning across the three domains. To determine this, four research questions are investigated:
1. What assessments are used to measure the learning benefits of new laboratory implementations (NLI)?
2. What assessments are used to measure the learning benefits when comparing laboratory modes (CLM)?
3. What are the new laboratory assessment innovations (LAI)?
4. Is it currently possible to holistically assess all laboratory learning objectives across the cognitive, psychomotor and affective domains?

III. COGNITIVE, PSYCHOMOTOR AND AFFECTIVE LEARNING

To appreciate the contribution of this investigation, a greater understanding is needed of the types of learning available through experimentation. At the foundation lies the historical context of the laboratory and associated learning objectives outlined in the work by Feisel and Rosa (2005). The thirteen laboratory objectives mentioned in the introduction outline the diversity of learning opportunity. Students do not need to show competency in the objectives in one course, but collectively over time. Bloom's Taxonomy considers learning across three overlapping domains: cognitive (reflecting students' knowledge and thinking skills); psychomotor (focusing on manual tasks that require the manipulation of objects or apparatus, involving coordination between brain and body in performing the tasks); and affective (changes in attitude, beliefs, emotions and feelings) (Anderson et al., 2001, Salim et al., 2013). Building upon this work, Salim et al. (2013) undertook a process to map the thirteen learning objectives and divide them into measurable learning opportunities across the three learning domains. It is important to note that the division is not absolute because almost all learning activities involve more than one domain. The authors then further refined this work to produce the Laboratory Learning Objectives Measurement (LLOM) instrument (Nikolic et al., 2021), increasing the diversity of application by changing keywords to reframe the objectives into the appropriate context. LLOM comprises nine statements in the cognitive domain, including: understanding the operation of equipment/software used within the laboratory; reading and understanding datasheets/circuit-diagrams/procedures/user-manuals/help-menus; and recognizing safety issues associated with laboratory experimentation. The psychomotor domain covers seven statements, including: correctly conducting an experiment on [course equipment/software name, e.g., power systems]; interpreting sounds, temperature, smells and visual cues to diagnose faults/errors; and taking the reading of the output from circuits/instruments/simulations/programs.
The affective domain covers seven statements, including: considering ethical issues in laboratory experimentation and communication of discoveries; learning from failure (when an experiment/simulation/code fails, or results are unexpected); and motivating oneself to complete experiments and learn from the laboratory activities. The full list of statements can be found in Nikolic et al. (2021). What can be observed from these statements is that multifaceted learning is occurring in the laboratory that may not be able to be addressed by assessment tasks that focus only on cognitive competencies. For example, does a laboratory report correctly identify a student's ability to motivate oneself or learn from failure? Possibly it does, perhaps only implicitly; it may simply depend on the stated aims of the assessment task. It may be possible to determine competency at the end of the laboratory session or series, but can these objectives be measured in a pre and post-test context to determine the amount of learning that has occurred? This investigation tries to answer these questions by using LLOM as the theoretical basis for how items are classified across the three learning domains. As will be discussed in greater detail in the next section, this classification is implemented by decoding the learning objectives stated in the research articles together with the stated assessment tasks and correlating them with the corresponding learning objectives in LLOM.

IV. METHOD

Systematic literature reviews are an important and emerging area within engineering education, and more are needed to keep pace with other fields and enable knowledge sharing (Henri et al., 2017, Jackson et al., 2018). The systematic process applied in this research analysis follows the guidance outlined by Froyd et al. (2015). The search of publications was limited to peer-reviewed journal articles. Data collection commenced by directly exploring, by volume, engineering education journals listed in Scopus. This process allowed the authors not only to discover articles of relevance, but also to gather insights on search terms, the probability of finding relevant articles, and the development of the inclusion criteria. This initial search was targeted at high and medium ranked Scopus journals that included (in ranking order using 2019 CiteScore): the Journal of Engineering Education, IEEE Transactions on Education, European Journal of Engineering Education, Global Journal of Engineering Education, International Journal of Electrical Engineering Education and Australasian Journal of Engineering Education. These titles, due to their global appeal and ranking, were expected to be the most likely to provide detailed learning objective specifications, assessment and student learning data within the research designs. Limited snowballing was undertaken to explore the suitability of other possible journals. Identified in this process was that some journals supported different outcomes (for example, a focus on describing the implementation of new learning experiments without focusing on the learning). Such articles did not meet the inclusion criteria. Moreover, it was found that higher-ranking, engineering-education-focused journals in the field were most likely to meet the inclusion criteria, probably because of the stronger evidence required for meeting review criteria. The lower-ranked journals had relevant papers, but these were not strong enough to meet the inclusion criteria discussed further on.
For example, between 2015 and 2020 zero percent of the identified articles in the Global Journal of Engineering Education met the inclusion criteria (note: inclusion criteria and examples are discussed in more detail towards the end of this section). This was not unexpected, due to the higher evidence demands of the higher ranked journals. To further extend the search, databases were selected that included the identified journals. The IEEE Xplore, Taylor and Francis and Wiley databases were used to search for relevant articles of the same quality that may have been published in other related journals. While more articles may exist beyond the initial search and the database search, the number was considered minimal based on the factors discussed and the analysis presented in Table I.

It was of interest to understand recent research practices, but also to reflect on older research that might provide historical perspective. To provide this balance, two decades of research were explored, limiting the criteria to articles published between 2000 and 2020. Due to the timing of this research, only papers published in the first half of 2020 were considered for that year. Only teaching laboratory research conducted beyond K-12 was considered. The focus was on experimental design and not theoretical frameworks. The initial search approach discovered that a small set of common words could be used to identify targeted articles. Therefore, the keywords used to identify relevant publications were [laboratory] AND [assessment], [laboratory] AND [learning], and [laboratory] AND [achievement]. Articles were first screened by title and abstract for relevance. Only primary journal articles written in English directly related to engineering education were selected. A quick scan of the articles followed to ensure that they were either a LAI or an NLI/CLM that clearly discussed laboratory assessment beyond student surveys. This is because the core objective of the systematic review is to synthesize what is known about student achievement and/or learning obtained through assessments and not through student perception. Any form of assessment was considered. At this stage, 85 papers were identified.

A valuable insight into the changing dynamics of the evidence required to publish an engineering education article over time is shown in Table I. The table showcases the initial data collection analysis (identified during the quick scan) for papers found through IEEE Xplore between 2000-2005 and 2015-2020. IEEE Xplore was chosen for this insight as this database had the most relevant and diverse range of journals fitting within the engineering scope. Between 2000-2005 the majority of papers included no student assessment or survey data and focused solely on the innovation itself. This trend has changed at IEEE Transactions on Education (ToE), but the use of assessment data is still non-existent in non-engineering education journals. As a positive, these journals have started to use survey data to support implementation outcomes. Other IEEE education-based journals had not been established in the first time period. While some articles include an assessment component, the emphasis of the papers remains on the innovation. This provides further justification of why core engineering education journals such as IEEE Transactions on Education and the European Journal of Engineering Education were central to this study.
While such detailed analysis was not undertaken for other time periods or the other database searches, the results of the overall quick scan indicated a similar pattern.

TABLE I – ANALYSIS OF STUDENT ASSESSMENT & SURVEY EVIDENCE FOUND VIA IEEE XPLORE (COMPARING 2000-2005 & 2015-2020)

Period    | Publication          | Assessment & Survey | Assessment Only | Survey Only | No Assessment or Survey | Total Papers
2000-2005 | IEEE ToE             | 7%                  | 0%              | 33%         | 59%                     | 27
2000-2005 | IEEE Other Education | 0%                  | 0%              | 0%          | 0%                      | 0
2000-2005 | Non-Education        | 0%                  | 7%              | 0%          | 93%                     | 15
2000-2005 | Articles including an assessment component = 7%; articles including a student survey component = 26%
2015-2020 | IEEE ToE             | 67%                 | 17%             | 17%         | 0%                      | 18
2015-2020 | IEEE Other Education | 29%                 | 0%              | 18%         | 53%                     | 17
2015-2020 | Non-Education        | 0%                  | 0%              | 33%         | 67%                     | 12
2015-2020 | Articles including an assessment component = 43%; articles including a student survey component = 57%

The next step involved a detailed reading of each article to confirm that the inclusion criteria were met. Articles meeting the inclusion criteria were separated into LAI, NLI or CLM categories. For both NLI and CLM articles, the key inclusion requirement was that at least one real assessment was used, discussed and supported by appropriate learning objective background information. While NLI and CLM literature is plentiful, the overwhelming majority, especially from lower ranked education journals (Barzdenas et al., 2019, Davis et al., 2019) or non-engineering education focused journals of any rank such as (Kotsampopoulos et al., 2017, Altalbe, 2019), were assessed solely using survey-based approaches. While student perception via surveys is important, such approaches do not provide real student achievement or learning data. Many articles, such as (Gustavsson et al., 2009, Stefanovic et al., 2015), devoted substantial space to laboratory objectives, including the need to address the 13 core laboratory objectives outlined in Feisel and Rosa (2005), but failed to use such assessment within the analysis and were excluded. Some articles, such as Azad (2007), used real assessment such as a pre and post-test but provided little background about the tests and the learning objectives and were excluded. Articles such as Zine et al. (2018) mentioned assessment comparisons but provided very little detail. Papers that described a laboratory implementation, including discussing the expected learning outcomes, but were not supported by assessment evidence, such as (Khubalkar et al., 2018, Forcan et al., 2018), were also not included. Two of the authors needed to agree that any article met the inclusion criteria. At the inclusion stage, 32 papers were identified and analyzed.

All analysis conducted, be it on the learning objectives or assessment, was solely based on the laboratory component. For example, in some articles, such as Leger (2019), the authors discuss an entire new course, but only information directly related to the laboratory is used and analyzed. Likewise, findings are only discussed in terms of the laboratory assessments used, and not the overall findings of the article. By singling out the laboratory components and ignoring any supporting student survey data, findings determined by the authors may deviate from those stated in the research article. The process undertaken to categorize the assessments to the cognitive, psychomotor or affective domain included understanding the stated learning objectives and the activities undertaken by the students.
This provided context of what objectives the assessment tasks assessed, and the authors correlated them (to the best of their ability given the information available) with the corresponding learning objective in LLOM (Nikolic et al., 2021). Matches were made for explicit connections to the learning objectives. For example, if a written pre- and post-test was used, it was highly likely that the assessment explicitly focused on cognitive outcomes such as understanding, knowledge and analysis. Some psychomotor achievement could possibly be linked, but may have been so only implicitly. A group presentation, on the other hand, may be correlated to the cognitive and affective domains based on the multifaceted layers of information generated by such an assessment.

Like any systematic literature review, this paper is limited by the search coverage and possible biases introduced during article selection, data extraction, analysis and interpretation. The limited information provided in each article on learning objectives, rubrics and implementation may hide a wider reach of learning objectives than reported by the articles' authors. Additionally, learning across the cognitive, psychomotor and affective domains cross-pollinates and can be difficult to separate distinctly. Moreover, assessments listed were those directly discussed in each article, and in some cases, such as in (Nikolic et al., 2018a), it is clear that not all assessments were outlined in the paper.

V. RESULTS

The analysis to answer the research questions is summarized in Tables II, III and IV. It is important to note that the analysis and findings for Tables II and III relate only to assessment components. The assessments needed to analyze real student learning or achievement data. All other measures, including student surveys, are not considered and therefore exclude the more holistic findings of the individual papers. This perspective makes this systematic review unique, especially compared to the many reviews (Brinson, 2015, Faulconer and Gruss, 2018) looking at differences in lab modes. The focus of this systematic review is on what assessments are used and what domain-based learning objectives they relate to, not which approach or mode is better and why. For the three tables the interpretation of each article's learning objectives is classified across the cognitive, psychomotor or affective domain using the learning structure that outlines the types of learning conducted in each domain as documented in Nikolic et al. (2021). For Table II (NLI) and Table III (CLM), an 'X' denotes that a domain-based learning objective was listed or an assessment was used, and a 'V' denotes that the domain-based learning objective or assessment was verified by assessment analysis. For Table IV (LAI) the domain-based objectives are identified as explicit ('E') or implicit ('I') within the use of the innovative assessment.

A. What assessments are used to measure the learning benefits of new laboratory implementations (NLI)?

A total of 16 articles met the inclusion criteria that focused on evaluating NLI. Only two of the articles (Wolf, 2010, Spanias and Atti, 2005) attempted to measure learning (pre and post assessment), with the rest comparing student achievement. The article by Wolf (2010) went a little further and tried to distinguish learning between the lecture and laboratory, highlighting that 45.9% of learning was attributable to the laboratory.
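The distinction between measuring learning (pre/post change) and measuring achievement (a single score) can be made concrete with a worked example. One common way to summarize a pre/post pair is a Hake-style normalized gain; this metric is named here purely as an illustration and is not necessarily the one used by the reviewed articles. A minimal sketch in Python, assuming percentage scores:

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake-style normalized gain: the fraction of the available
    improvement (max_score - pre) actually realized at post-test."""
    if pre >= max_score:
        return 0.0  # no room to improve; treat gain as zero by convention
    return (post - pre) / (max_score - pre)

# Example: a student scoring 40% before and 70% after the laboratory
# realizes half of the possible improvement.
print(normalized_gain(40, 70))  # 0.5
```

A final grade or exam score, by contrast, captures only the post state, which is why the papers using such measures are described here as comparing achievement rather than learning.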
While not directly an assessment, final grades were the most used tool to evaluate learning benefits (6 articles). Final grades provide a good overview of student achievement but fail to provide a clear picture for each individual learning objective. Learning objective definitions varied between articles, but most provided a good overview that could easily be classified across the three learning domains. At one end of the spectrum, articles like Braun (2010) clearly state all official learning objectives, while articles such as Wolf (2010) require the reader to put the pieces of the jigsaw puzzle together. Connecting the learning objectives to accreditation outcomes, as in Jacobson et al. (2006), helps readers extract more value by connecting the usefulness of the work to the learning outcomes they are interested in.

All 16 articles listed learning objectives associated with the cognitive domain and used assessments to measure achievement or learning. However, only nine articles listed learning objectives in the psychomotor or affective domain, and of those only two were analyzed with an assessment task (psychomotor (Vial et al., 2015, Vojinovic et al., 2020); affective (Jacobson et al., 2006, Vojinovic et al., 2020)). This confirms that engineering education researchers/practitioners could be doing more to thoroughly evaluate the multifaceted benefits of laboratory learning beyond the use of student evaluations (Nikolic et al., 2021). A variety of assessments were outlined and discussed across the articles, but many missed opportunities to engage and extract value by comparing and discussing student achievement. In most cases a description was provided with some general observations, but no quantitative analysis. Most common is the use of formal written lab reports (3 articles analyzed assessment data / 7 articles mentioned the assessment in total) (Leger, 2019, Jacobson et al., 2006, Clark and Mahboobin, 2018). This was followed by pre-lab quiz or assessment (2/6) (Leger, 2019, Jacobson et al., 2006), project assessment (2/5) (Radu et al., 2011, Kellett, 2012), weekly post-lab assessment or report (2/5) (Rodgers et al., 2020, Braun, 2010), in-class assessment or notebook entries (1/5) (Vojinovic et al., 2020), demonstration or interview (1/4) (Vojinovic et al., 2020), lab exam (1/1) (Vial et al., 2015), instructor observation (1/3) (Clark and Mahboobin, 2018), group presentation (1/1) (Jacobson et al., 2006) and online lab report (0/1). Five articles (Milano et al., 2008, Laverty et al., 2012, Nikolic et al., 2018a, Magnus et al., 2020, Khan et al., 2017) out of six focused on final grades only. In total, 29 different assessments were used to analyze learning or achievement from a possible 53 opportunities (54%), highlighting a missed opportunity for more focused analysis.

The remote and simulation-based articles had learning objectives that, as would be expected, focused only on the cognitive domain. Assessment focused only on pre/post-tests or final grades, perfectly matched to cognitive objectives. Only 25% of the traditional-based articles had learning objectives limited to the cognitive domain, highlighting the greater learning flexibility that comes from face-to-face, hands-on instruction. This was reflected in the greater diversity of assessments used to quantify achievement. A weakness was noticed in some articles that tried to make comparisons using student achievement, especially with final grades.
A comparison between approaches would be made, but the composition of the assessment changed, such as in Magnus et al. (2020), leading to unequal comparisons. The best assessments used to quantify achievement in the psychomotor and affective domains came from lab tests, presentations, demonstrations or interviews to check knowledge, and demonstrator observations, as found in (Laverty et al., 2012, Rodgers et al., 2020, Vial et al., 2015, Jacobson et al., 2006, Kellett, 2012, Vojinovic et al., 2020, Magnus et al., 2020, Clark and Mahboobin, 2018). Interestingly, only one lab test (Vial et al., 2015) was undertaken, suggesting a high logistical and workload impact of such assessment (Nikolic et al., 2015b).

B. What assessments are used to measure the learning benefits when comparing laboratory modes (CLM)?

A total of ten articles met the inclusion criteria that focused on evaluating CLM. The most noticeable difference between NLI and CLM articles was the focus on learning. Learning via pre and post-tests was explored in 40% of CLM articles (Campbell et al., 2002, Kollöffel and de Jong, 2013, Lang et al., 2007, Shyr, 2010) compared to 12.5% of NLI articles. All the tests were written and could be considered as measuring cognitive learning only. The other 60% of articles explored differences in student achievement, and those assessments were all interpreted as measuring cognitive outcomes. The other noticeable difference between NLI and CLM articles is the focus on explaining the learning objectives. In most cases the stated learning objectives were limited in description, such as in Gamo (2019) and Abdel-Salam et al. (2006), making it harder to classify them into the three learning domains. This is generally because outlining the new innovation, not the educational benefits, was the major focus of such articles (Post et al., 2019).

As expected, all CLM articles stated cognitive learning objectives and used assessments to investigate achievement or learning in that domain. Eight articles stated psychomotor objectives and two stated affective objectives. However, unlike the NLI articles, no assessments were used to analyze achievement or learning in those two domains. This is consistent with other findings that indicate non-traditional labs focus on content knowledge and understanding (Brinson, 2015). The lack of focus on the psychomotor and affective domains was seen in the lower variety of assessments used in CLM articles (7) compared to NLI (15). The most common assessments were comparing student achievement via exams (4/4) (Gamo, 2019, Kollöffel and de Jong, 2013, Abdel-Salam et al., 2006, Gurocak, 2001), together with the pre and post-test combination (3 analyzed / 4 total) (Campbell et al., 2002, Kollöffel and de Jong, 2013, Lang et al., 2007, Shyr, 2010). This was followed by formal written laboratory reports (2/5) (Balakrishnan and Woods, 2013, Ogot et al., 2003), weekly assignment/report (2/3) (Abdel-Salam et al., 2006, Gurocak, 2001), post-lab test (2/2) (Balakrishnan and Woods, 2013, Steger et al., 2020), final grades (1/1) (Gurocak, 2001), and in-class assessment or notebook (0/2).
Even though the assessments were only targeted at cognitive learning the contributions were very useful. The assessments show how the different modes can be used to target and aid specific areas of learning. In particular, articles such as (Campbell et al., 2002, Gamo, 2019, Kollöffel and de Jong, 2013) show the benefits of using different modes together. A more holistic investigation across all domains may show even greater benefits of combining modes. C. What are the new laboratory assessment innovations (LAI)? A total of six articles met the inclusion criteria that focused on presenting new laboratory assessment innovation. Inclusion did not require actual achievement / assessment evaluation as per the previous research questions due to the exploratory nature of this research question. Most noticeable is the fact that the articles are mainly published in the last six years. This confirms the growing awareness that more needs to be done on the assessment front (Loui, 2016). Three of the six articles (Garcia et al., 2005, Pardines et al., 2014, Ross, 2017) focused on administrative efficiencies that could be used to improve feedback. The focus of the three papers together with (Lal et al., 2017, Chen et al., 2018) was on improving in-class assessment with only Andersson and Weurlander (2019) focused on improving a post class assessment, the lab report. Therefore, driving improvements in in-class assessment appears to be the major area of interest within the engineering education community. Both (Garcia et al., 2005, Pardines et al., 2014) have a programming focus, looking to provide better feedback while reducing staff workload. This is achieved in Garcia et al. (2005) by subjecting code to a battery of tests to provide instant feedback and to allow students to improve with each resubmission. While Pardines et al. (2014) focusses more on checking understanding instead of working code through the delivery of online tests that provides relevant feedback. The idea was that this approach would better align with exam achievement, but that was not found. The use of such online tests was also explored in Chen et al. (2018) together with narrative portfolios that provide for more flexibility and student creativity. The portfolios, while being a good idea were limited by the associated marking rubric. The use of an in-class assessment designed to meet many lab learning objectives in Lal et al. (2017) was compared to lab reports. This group work approach was found to support the development of skills required for practical work, but the group work was found to hide poor performance that was discovered through the lab reports. By using reflective practice via peer review in Andersson and Weurlander (2019) with lab reports, greater value and greater development of higher order skills could be extracted from the use of lab reports. Such reflective practices have been found equally useful in other engineering settings (Nikolic et al., 2018b). In Ross (2017) the focus was to digitize the collection of in-class activity and this could provide efficiency improvements to the work outlined in Lal et al. (2017). Therefore, there is great interconnectivity between the works of many of these innovations, providing a pathway for further improvement. 
All six innovative works could be tied to assessing the cognitive domain, with the works in (Garcia et al., 2005, Pardines et al., 2014, Lal et al., 2017, Chen et al., 2018) showing an explicit connection and (Ross, 2017, Andersson and Weurlander, 2019) showing an implicit connection because no evaluation of achievement/learning was conducted. The work in Garcia et al. (2005) could be explicitly connected to the psychomotor domain due to the programming focus and the associated link of submitting working code. An implicit connection was seen in (Pardines et al., 2014, Lal et al., 2017, Ross, 2017, Chen et al., 2018). Assessing the affective domain was implicit in (Lal et al., 2017, Ross, 2017, Chen et al., 2018, Andersson and Weurlander, 2019). Therefore, the concentration of innovation remained on assessing cognitive learning. This provides an opportunity for future work to develop innovative approaches that explicitly assess the psychomotor and affective domains. There is also an opportunity to connect the many non-lab-based assessments outlined in Henri et al. (2017).

D. Is it currently possible to holistically assess all laboratory learning objectives across the cognitive, psychomotor and affective domains?

The driving motivation of this systematic literature review was to find a way to build upon the authors' previous work (Nikolic et al., 2021), which used the Laboratory Learning Objectives Measurement instrument to holistically quantify students' perceived learning across the three learning domains and all lab learning objectives. The next step in the scaffold was to quantify real learning via appropriate assessment tasks. Accomplishing this for cognitive learning with written pre and post-tests was straightforward, but how to do this for the psychomotor and affective domains (especially overcoming administrative and logistical challenges) was unclear. Based on current practice extracted from the literature, the use of lab tests, presentations, demonstrations/interviews and instructor observations appears to be the most suitable option to gain further understanding of lab learning across the affective and psychomotor domains. Vial et al. (2015) used a laboratory test and instructor observations to determine and observe changes to learning, Vojinovic et al. (2020) used observations of student performance, Jacobson et al. (2006) used group presentations and Clark and Mahboobin (2018) used instructor observation. These tests and observations allow teaching staff to confirm students' psychomotor and affective competencies. However, using such assessments may have administrative, logistical and scalability challenges, especially when trying to obtain a large sample. To measure learning, a pre and post component would be needed. Using a written pre and post-test takes little time and effort for both the student and teaching staff, and it may be easy to get volunteer participation as required by ethics approval. However, undertaking an additional laboratory test, presentation or demonstration at the start of the first laboratory session is time-consuming for both students and teaching staff, and this can lead to lower participation rates. Moreover, it could also be difficult to separate all objectives listed in Nikolic et al. (2021) using such approaches. Advancing knowledge in this area via student achievement instead of learning may be required. These challenges should not be seen as an obstacle, but simply a hurdle to be worked through.
TABLE II – SUMMARY OF INCLUDED PAPERS OF NEW LABORATORY IMPLEMENTATIONS WITH LEARNING OBJECTIVES AND ASSESSMENTS MAPPED

Paper | Year | Discipline/s | Sample | Purpose | Finding
1 | 2005 | Elec | 87 | Impact of new simulation tool | Changes in pre/post-test scores show learning occurred
2 | 2006 | Multidiscipline | unclear | Implementing structured design experiments | Assessments are shown to map to and meet course objectives
3 | 2008 | Elec | 184 | Impact of new Matlab toolbox | New lab correlated with higher grades and more students passing the course
4 | 2010 | Elec / Comp | 111 | Improve sustainability analysis skills | Students including sustainability analyses in lab reports score higher in lecture quiz
5 | 2010 | Elec / Comp | 29 | Ensure new virtual lab effective for learning | Learning attributable 54.1% to lectures and 45.9% to laboratory
6 | 2011 | Elec / Comp | unclear | Impact of unlimited access to boards | C1/2: both scores and project difficulty higher; C3: higher failure rates, but more difficult projects
7 | 2012 | Elec | 73 | Introduce embedded devices | Final grades generally high
8 | 2012 | Comp | unclear | Outlines a new project-based course | Student project and exam achievement shown to inform implementation success
9 | 2015 | Elec / Tele | 167 | Impact of supporting multimedia resources | Added resource correlated with better practical performance, lab test and final results
10 | 2017 | Elec | unclear | Explore new Matlab simulation | Adding simulation resulted in an improvement in final grades
11 | 2018 | Multidiscipline | 793 | Achievement across disciplines | Achievement correlated with perceived discipline relevance
12 | 2018 | Biomed | 61 | Impact of scaffolding PBL | Scaffolding PBL led to both observed and project achievement improvements
13 | 2019 | Multidiscipline | 36 | Address gaps in alternate energy education | All students exceeded the standard on a combined assessment
14 | 2020 | Chemical | 817 | Impact of video prelab resource | Prelab videos associated with better preparation and higher weekly assessment marks
15 | 2020 | Elec | 515 | Impact of using tiered assignments | Student in-class and exam achievement increased compared to the traditional method
16 | 2020 | Elec | 278 | Impact of hybrid PBL approach | Hybrid approach associated with an increase in final grades compared to traditional

Total (used in data analysis): Cognitive 16, Psychomotor 2, Affective 2; 29 assessment uses analyzed (of 53 outlined).
Total (outlined and used in lab): Cognitive 16, Psychomotor 9, Affective 9; 53 assessment uses outlined.

'X' denotes that a domain-based learning objective was listed or an assessment was used; 'V' denotes that the domain-based learning objective or assessment was verified by assessment analysis (per-paper X/V marks are as in the original table layout). Assessment columns tracked: Formal Written Laboratory Report (For Wr Rep), Online Laboratory Report, In-class Assessment or Notebook Entries (InC Ass/NB), Project, Weekly Assignment or Report (Week A/Rep), Prelab Quiz or Assessment (Prelab Q/A), Prelab Test, Lecture Quiz, Postlab Test, Lab Test, Final Grade, Exam, Group Presentation (Group Pres), Demonstration or Interview (Demo/Inter), Instructor Observation (Ins Observ).
TABLE III – SUMMARY OF INCLUDED PAPERS OF COMPARING LABORATORY MODES WITH LEARNING OBJECTIVES AND ASSESSMENTS MAPPED (ABBREVIATIONS AS PER TABLE II)

Paper | Year | Discipline/s | Sample | Purpose | Finding
17 | 2001 | Manufacturing | 33 | Recorded synchronous vs trad | Student achievement similar between the remote and local students
18 | 2002 | Elec / Comp | 160 | Sim + trad vs trad | Combined lab performed significantly better based on the written final test
19 | 2003 | Mech | 70 | Rem vs trad | Student achievement based on lab reports showed no difference between modes
20 | 2006 | Mech / Civil | 133 | Recorded vs trad | Lab report achievement higher for recorded-lab students, but exam achievement similar
21 | 2007 | Elec | 104 | Rem vs trad | Virtual lab students learn at least as much as trad lab students
22 | 2010 | Tron | 34 | Sim vs trad | Based on pre/post tests, learning was higher for the sim group
23 | 2013 | Elec | 86 | Sim + trad vs trad | Post-test showed that sim outperformed on conceptual and procedural (calculations) skills
24 | 2013 | Elec / Tele | 55 | Sim vs trad | Simulation resulted in higher student achievement due to data acquisition
25 | 2019 | Biomed | 70 | Sim + trad vs trad | Exam performance on related questions increased for sim + trad
26 | 2020 | Elec | 129 | Sim vs trad | Student achievement based on post-tests showed no difference between modes

Total (used in data analysis): Cognitive 10, Psychomotor 0, Affective 0; 19 assessment uses analyzed (of 25 outlined).
Total (outlined and used in lab): Cognitive 10, Psychomotor 8, Affective 2; 25 assessment uses outlined.

TABLE IV – SUMMARY OF INCLUDED PAPERS OF LABORATORY ASSESSMENT INNOVATIONS WITH LABORATORY OBJECTIVES MAPPED (E = EXPLICIT, I = IMPLICIT)

Paper | Year | Discipline/s | Sample | Purpose | C | P | A | Why? | Finding
27 | 2005 | Comp | 2053 | Automatic assessment including battery of tests and plagiarism detection | E | E | – | Approach to provide continuous and therefore more feedback | Each failed test gives students enough feedback to correct their laboratory work and to learn from their mistakes
28 | 2014 | Comp | 159 | Improve the subjective nature of in-class assessment | E | I | – | Improve admin efficiency and workload, and better check learning | The lab assessment measures the knowledge acquired by the students accurately, but can't correlate to exam
29 | 2017 | Chem / Civ / Mech / Min / Petro | 259 | Comparing an in-class assessment (group work) against lab report (individual) | E | I | I | Understand if in-class assessment has learning advantages over reports | In-class assessment supports development of skills required for practical work; group method can hide poor performance; time issue
30 | 2017 | Tron | 42 | Improve in-class assessment to provide more timely feedback | I | I | I | Improve efficiency, accuracy, data integrity and Moodle integration | No learning evaluation, but the administrative benefits are clear, leading to better feedback
31 | 2018 | Elec / Comp | 68 | Investigate formative electronic lab assessments with narrative portfolio | E | I | I | Understand if this approach is better than standard lab reports | The new approach resulted in higher student achievement against traditional reports (less deep learning)
32 | 2019 | Elec | 27 | Improve value of lab reports by using peer review and reflection | I | – | I | Improve communication / technical quality and reflection | No learning evaluation, but the benefits of reflective practice incorporate many more skills

VI. GENERAL DISCUSSION

The systematic process has showcased the important role highly ranked engineering education journals play in driving our understanding of learning in the laboratory.
As was shown in Table I, in the last five years 84% of IEEE ToE articles had an assessment component, compared to 0% in non-engineering education focused journals found in IEEE Xplore. Unfortunately, the majority of non-engineering education journals do not even include a student evaluation component, although this has improved since 2000-2005. With a 57% share (just within the IEEE Xplore database), student surveys currently play a leading role in driving our understanding of laboratory learning. The data presented in Table I was collected before the detailed analysis culling stage, hiding the fact that some papers with an assessment component had minimal evidence and/or analysis, with the lion's share of data being driven by student surveys. If articles from lower ranked engineering education journals were included, the share of papers with a student survey focus would be higher than 57%. When comparing the current articles in IEEE Xplore with an assessment component (43%) to those found between 2000 and 2005 (7%), a positive improvement in learning analysis practice is seen. Therefore, the community is moving in the right direction, and hopefully the analysis and recommendations from this systematic review will help improve practice further.

Included articles were concentrated primarily within the Journal of Engineering Education, European Journal of Engineering Education, and IEEE Transactions on Education across the twenty-year period. Over the last five years, articles also started appearing regularly in the Australasian Journal of Engineering Education. Laboratory articles found in lower ranked engineering education journals, or non-engineering education journals of any rank such as those found on IEEE Xplore, generally focused primarily on the technical aspects of the innovation. If the innovation attempted to provide learning evidence, it was highly likely to be based on student perception. If learning or achievement was used as evidence, those studies generally provided very basic, unclear assessment and/or learning objective information. From hundreds of possible articles, only a handful meaningfully incorporated assessment data analysis together with a sufficient outline and discussion of learning objectives to the standard applied for inclusion. Many articles discussed learning objectives at length but did not follow through with supporting evidence showing that the learning objectives had been met. Linking the learning objectives with the assessment analysis would strengthen the research outcomes by telling the community more about the learning being achieved.

Student perception of learning is important and provides great insights into learning occurring across the three domains. The authors themselves have undertaken substantial work to show the reliability and usefulness of such data (e.g., (Nikolic et al., 2015a, Nikolic et al., 2017, Nikolic, 2015)). However, it is clear more attention is required on analyzing assessment data to improve our holistic understanding. In many cases multiple assessments were discussed, but analysis was based only on final grades, creating a missed opportunity for greater insight. The limited articles and sample sizes summarized in Tables II and III suggest that more data would support the community's understanding of learning occurring in the laboratory. This finding supports the statements made by Loui (2016) on the need for the engineering education community to increase efforts in this area.
Therefore, a recommendation is made that journals insist that future research incorporate greater discussion of the learning objectives and link this explicitly to at least one form of assessment analysis. More understanding of learning can only come from increased efforts to extract this information by analyzing performance through assessments. The assessment should be designed to determine competency in the stated learning objective. Increased efforts to do this will help build empirical evidence supporting the learning benefits of the laboratory.

In many papers it was extremely difficult to extract the learning objectives and then map them to the assessment tasks. In many cases, just obtaining a clear understanding of the learning objectives required some detective work. In other cases, the outline of how the assessment tasks aligned with the learning objectives tended to be limited. Providing this information more clearly would help readers better align the research outcomes with their own work. Therefore, a recommendation is made that future research incorporate a mapping between assessment tasks and learning objectives in an easy-to-follow manner. An exemplar of such an approach can be found in Jacobson et al. (2006). One possible structure recommended by the authors is shown in Table V. Such a structure would allow the reader to gain a very detailed understanding of the learning occurring in the laboratory and the learning evidence being produced by the assessment.

Table V: Recommended Assessment/Learning Objective Mapping Format (example of implementation included)

Course Learning Objective (CLO) linked to the Laboratory:
1. Demonstrate appropriate laboratory skills

Accreditation Criteria linked to the Laboratory:
1. An ability to apply knowledge of math, science and engineering
2. An ability to function on multi-disciplinary teams
3. An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice

LLOM objectives linked to CLO / Laboratory design:
1. Understand the operation of equipment/software used within the laboratory
2. Read and understand datasheets and circuit diagrams
3. Analyze the results from an experiment
4. Write a laboratory report in a professional manner
5. Correctly conduct an experiment on digital hardware
6. Construct a working digital hardware circuit
7. Interpret sounds, temperature, smells, and visual cues to diagnose faults/errors
8. Work in a team to conduct experiments, diagnose problems, and analyze results
9. Learn from failure

Laboratory Assessments used in the Laboratory:
Formal written laboratory report (20% of final grade)

LLOM objectives linked (explicitly) to Laboratory Assessment:
3. Analyze the results from an experiment
4. Write a laboratory report in a professional manner
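One way to operationalize the Table V format is to capture the mapping as a simple data structure, so that the link between objectives and assessments can be checked automatically, for example by flagging objectives with no explicit assessment. The sketch below is a hypothetical illustration in Python; the field names and the subset of objectives shown are assumptions based on the Table V example row, not part of the authors' recommendation.

```python
# Sketch of the Table V mapping captured as data, so the link between a
# CLO, its LLOM objectives and each assessment can be checked
# programmatically. Structure and field names are illustrative only,
# mirroring Table V's example row.
clo_mapping = {
    "clo": "Demonstrate appropriate laboratory skills",
    "accreditation_criteria": [
        "Apply knowledge of math, science and engineering",
        "Function on multi-disciplinary teams",
        "Use the techniques, skills, and modern engineering tools for practice",
    ],
    "llom_objectives": {  # LLOM objectives linked to the CLO / lab design
        3: "Analyze the results from an experiment",
        4: "Write a laboratory report in a professional manner",
        9: "Learn from failure",
        # ... remaining objectives from Table V
    },
    "assessments": [
        {"name": "Formal written laboratory report", "weight": 0.20,
         "llom_assessed": [3, 4]},  # explicitly assessed LLOM objectives
    ],
}

# Objectives listed for the laboratory but not explicitly assessed by any task:
assessed = {o for a in clo_mapping["assessments"] for o in a["llom_assessed"]}
unassessed = set(clo_mapping["llom_objectives"]) - assessed
print(sorted(unassessed))  # [9] -> 'Learn from failure' has no explicit assessment
```

Keeping the mapping in this form makes the gap the paper describes visible at a glance: any objective that never appears in an assessment's explicit list is, at best, being assessed implicitly.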
Related to the previous observation is the fact that there was little to no explanation of why assessment tasks were selected to demonstrate the competencies for any given learning objective. Seventy-three percent of articles applied either a formal written report or a weekly assessment/report. If prelab quizzes are excluded for their preparatory nature, then 50% of articles relied on just one assessment. This highlights a lack of imagination, or more likely a lack of understanding of the learning benefits associated with different laboratory assessments beyond the historical status quo. If such a ratio is applied to all laboratory components of all subjects in the makeup of a degree, are students being faced with substantial repetition and reevaluation of the same competencies, just with a different theoretical component? In addition, it is not clear how each assessment addresses learning in the different domains. For example, a lab report can explicitly confirm cognitive knowledge but could also implicitly be used to confirm that a student had achieved the psychomotor skills to build and measure in the first place. This is dependent on the design and aims. Therefore, a recommendation is made that research be performed to take the assessment types identified in this systematic review and map them explicitly and implicitly to the cognitive, psychomotor and affective competencies being achieved. Such research would provide academics with greater awareness of the available assessments on offer and how they address different learning competencies, providing more informed assessment selection.

The fundamental basis of scientific research design is to implement experimental approaches free of bias. The authors do not advocate for any one mode, and are supportive of multi-mode implementations, but to make informed decisions strengths and weaknesses must be clearly identified. Research implementations discovered in this systematic review show a clear bias towards cognitive learning. For example, without surprise, 100% of the CLM articles used approaches that focused on cognitive learning; this is the easiest way to analyze assessment data. It is quick and easy for researchers to undertake a small written or multiple-choice pre- and post-test, compared to undertaking a pre- and post-laboratory exam. Determining how to overcome this bottleneck was a key driver for this systematic review. A cognitive focus is completely suitable if the chosen objectives are focused on cognitive principles, but scientifically the impact on all forms of learning should be investigated and reported accordingly. Earlier work by the authors (Nikolic et al., 2021) suggests that non-traditional formats can still lead to perceived improvements in psychomotor and affective learning. If these perceptions can be validated by real learning, this will provide greater understanding of the holistic learning occurring in non-traditional modes. Considering all domains, whether the learning is expected or not, would provide objective, holistic evidence allowing informed choice. Any one laboratory does not need to be a master of all things, but can work collectively with all other laboratories taught in the degree structure. An informed choice and better learning domain understanding can be used to create the necessary balance. Therefore, a recommendation is made that future investigations try to remove the cognitive bias and use assessment data that also compares psychomotor and affective learning, for better or for worse. This understanding could lead to better balance and greater use of mixed-mode approaches within learning practice.

A step towards removing the cognitive bias is related to the previous recommendation of mapping the learning competencies associated with each assessment task. Greater understanding of how assessments are being used, and what learning domain they map to, will identify opportunities for new assessment innovations.
Preliminary data extracted in this systematic review suggested that laboratory tests, presentations, demonstrations/interviews, and instructor observations offer a possible pathway forward, but one that may be limited by logistical constraints. For example, conducting a laboratory test with limited hardware/workstations can require multiple repeated test sessions, leading to many problems including leakage of question types and answers. While new laboratory assessment innovations have accelerated in recent years, the number found (five in the last five years) is still relatively small, and they remain primarily focused on cognitive areas of learning. Therefore, a recommendation is made for research into new assessment innovations that provide a low-logistics, low-workload solution to measuring psychomotor and affective competencies in the laboratory. Such an innovation would not only help bridge the gap towards greater use of holistic assessment, it would also provide a pathway to pre-/post-test opportunities for measuring learning in these two domains.

While new assessment innovations are welcome, the authors know from their own interactions with the engineering education community that the list of assessments outlined in this paper falls short of the variety of assessments being used in the field. This could be for reasons such as: not having the time or skill set, owing to the non-education research focus of the academic; the struggle to collect an appropriately sized research sample for journal publication; or the administrative strain of obtaining ethics clearance. Therefore, a recommendation is made for research that extends out to the community to discover the types of assessments being used in engineering education beyond those identified in this systematic review.

VII. CONCLUSION

The systematic review conducted led to several important observations, resulting in a set of recommendations to move understanding in this area forward. Firstly, it is important to note that the laboratory research conducted to date, regardless of the limitations identified in this investigation, indicates that a positive learning experience occurs through experimentation regardless of the mode. By slightly adjusting research practices, the engineering community can substantially improve its understanding of laboratory learning.

When it comes to learning, laboratory-based engineering education research is scattered, with an individualist approach applied that focuses on the innovative implementation. A concentration of learning evidence (if sought at all) gathered through student surveys is evident in lower-ranked or non-engineering-education journals, highlighted by the small number of journals that had articles meeting the inclusion criteria. While student surveys provide a quick and easy way to gather scientific evidence, greater understanding of laboratory learning can be gained through more comprehensive assessment analysis. Nevertheless, this study has shown that substantial progress has been made over the last twenty years in improving learning evidence in the laboratory. If the recommendations from this study are implemented, learning will play an even greater role in laboratory research into the future. Attempts to provide supporting learning evidence are limited by the research design and the information presented, highlighting the need for the suggested good-practice guidelines.
With learning objectives and assessment data tailored to each individual implementation, it is hard to synthesize and capture the holistic, multi-domain learning strengths and weaknesses across the board. Small changes to research approaches can unwind this limitation, providing an opportunity for the community to come together and better appreciate the learning role the laboratory plays in the engineering curriculum. Once a better understanding of learning is achieved, it becomes possible to gain a better appreciation of the different forms of assessment available for implementation. Assessments could then be fitted for purpose instead of defaulting to the trusted old written laboratory report. As a result, several recommendations have been made, creating a platform for the community not only to address some weak links, but also to conduct laboratory research in a structure that can be easily synthesized. This will take our understanding of laboratory learning to the next level. Without this systematic review, these observations would have remained assumptions or opinions. Synthesized evidence has now been provided to enable action and make a difference.

TABLE NUMBERING TO REFERENCES

The following table provides the link between the reference numbers in Tables II, III and IV and the reference list.

Number | Reference | Journal
1 | Spanias and Atti (2005) | IEEE ToE
2 | Jacobson et al. (2006) | IEEE ToE
3 | Milano et al. (2008) | IEEE ToE
4 | Braun (2010) | IEEE ToE
5 | Wolf (2010) | IEEE ToE
6 | Radu et al. (2011) | IEEE ToE
7 | Laverty et al. (2012) | EJEE
8 | Kellett (2012) | IEEE ToE
9 | Vial et al. (2015) | AJEE
10 | Khan et al. (2017) | IEEE ToE
11 | Nikolic et al. (2018a) | AJEE
12 | Clark and Mahboobin (2018) | IEEE ToE
13 | Leger (2019) | IEEE ToE
14 | Rodgers et al. (2020) | EJEE
15 | Vojinovic et al. (2020) | IEEE ToE
16 | Magnus et al. (2020) | IEEE ToE
17 | Gurocak (2001) | JEE
18 | Campbell et al. (2002) | JEE
19 | Ogot et al. (2003) | JEE
20 | Abdel-Salam et al. (2006) | EJEE
21 | Lang et al. (2007) | EJEE
22 | Shyr (2010) | EJEE
23 | Kollöffel and de Jong (2013) | JEE
24 | Balakrishnan and Woods (2013) | EJEE
25 | Gamo (2019) | IEEE ToE
26 | Steger et al. (2020) | IEEE ToE
27 | Garcia et al. (2005) | IEEE ToE
28 | Pardines et al. (2014) | IEEE RITA
29 | Lal et al. (2017) | AJEE
30 | Ross (2017) | AJEE
31 | Chen et al. (2018) | IEEE ToE
32 | Andersson and Weurlander (2019) | EJEE

REFERENCES

ABDEL-SALAM, T., KAUFFMAN, P. J. & CROSSMAN, G. 2006. Does the lack of hands-on experience in a remotely delivered laboratory course affect student learning? European Journal of Engineering Education, 31, 747-756.
ALTALBE, A. A. 2019. Performance Impact of Simulation-Based Virtual Laboratory on Engineering Students: A Case Study of Australia Virtual System. IEEE Access, 7, 177387-177396.
ANDERSON, L. W., KRATHWOHL, D. R., AIRASIAN, P. W., CRUIKSHANK, K. A., MAYER, R. E., PINTRICH, P. R., RATHS, J. & WITTROCK, M. C. 2001. A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives, abridged edition. White Plains, NY: Longman.
ANDERSSON, M. & WEURLANDER, M. 2019. Peer review of laboratory reports for engineering students. European Journal of Engineering Education, 44, 417-428.
AUSTRALIAN GOVERNMENT DEPARTMENT OF EDUCATION AND TRAINING. 2017. 2016 Employer Satisfaction Survey [Online]. Available: https://www.qilt.edu.au/docs/default-source/gos-reports/2017/ess-2016-national-report-final.pdf?sfvrsn=f0e0e33c_6 [Accessed].
AZAD, A. 2007. Delivering a remote laboratory course within an undergraduate program. International Journal of Online and Biomedical Engineering (iJOE), 3.
BALAKRISHNAN, B. & WOODS, P. C. 2013. A comparative study on real lab and simulation lab in communication engineering from students' perspectives. European Journal of Engineering Education, 38, 159-171.
BARZDENAS, V., GRAZULEVICIUS, G. & VASJANOV, A. 2019. TCAD tools in undergraduate studies: A laboratory work for learning deep submicron CMOS processes. The International Journal of Electrical Engineering & Education, 57, 133-163.
BOUD, D. 1995. Ensuring that assessment contributes to learning. Proceedings, International Conference on Problem-Based Learning in Higher Education. University of Linköping, Sweden.
BRAUN, D. 2010. Teaching Sustainability Analysis in Electrical Engineering Lab Courses. IEEE Transactions on Education, 53, 243-247.
BRINSON, J. R. 2015. Learning outcome achievement in non-traditional (virtual and remote) versus traditional (hands-on) laboratories: A review of the empirical research. Computers & Education, 87, 218-237.
CAMPBELL, J. O., BOURNE, J. R., MOSTERMAN, P. J. & BRODERSEN, A. J. 2002. The Effectiveness of Learning Simulations for Electronic Laboratories. Journal of Engineering Education, 91, 81-87.
CHEN, B., DEMARA, R. F., SALEHI, S. & HARTSHORNE, R. 2018. Elevating Learner Achievement Using Formative Electronic Lab Assessments in the Engineering Laboratory: A Viable Alternative to Weekly Lab Reports. IEEE Transactions on Education, 61, 1-10.
CLARK, R. M. & MAHBOOBIN, A. 2018. Scaffolding to Support Problem-Solving Performance in a Bioengineering Lab—A Case Study. IEEE Transactions on Education, 61, 109-118.
DAVIS, C., YOUNES, R. & BAIRAKTAROVA, D. 2019. Lab in a box: Redesigning an electrical circuits course by utilizing pedagogies of engagement. The International Journal of Engineering Education, 35, 436-445.
FAULCONER, E. K. & GRUSS, A. B. 2018. A review to weigh the pros and cons of online, remote, and distance science laboratory experiences. International Review of Research in Open and Distributed Learning, 19.
FEISEL, L., PETERSON, G. D., ARNAS, O., CARTER, L., ROSA, A. & WOREK, W. 2002. Learning objectives for engineering education laboratories. Frontiers in Education Conference (FIE 2002), 32nd Annual. F1D-1 vol. 2.
FEISEL, L. D. & ROSA, A. J. 2005. The Role of the Laboratory in Undergraduate Engineering Education. Journal of Engineering Education, 94, 121-130.
FORCAN, M., BANJANIN, M. & VUKOVIĆ, G. 2018. Advanced teaching method for balanced operations of overhead transmission lines based on simulations and experiment. The International Journal of Electrical Engineering & Education, 55, 14-30.
FROYD, J. E., FOSTER, M. J., MARTIN, J. P., BORREGO, M., CHOE, H. S. & CHEN, X. 2015. Special session: Introduction to systematic reviews in engineering education research. 2015 IEEE Frontiers in Education Conference (FIE), 21-24 Oct. 2015. 1-3.
GAMO, J. 2019. Assessing a Virtual Laboratory in Optics as a Complement to On-Site Teaching. IEEE Transactions on Education, 62, 119-126.
GARCIA, A., RODRIGUEZ, S., ROSALES, F. & PEDRAZA, J. L. 2005. Automatic management of laboratory work in mass computer engineering courses. IEEE Transactions on Education, 48, 89-98.
GUROCAK, H. 2001. e-Lab: An Electronic Classroom for Real-Time Distance Delivery of a Laboratory Course. Journal of Engineering Education, 90, 695-705.
GUSTAVSSON, I., NILSSON, K., ZACKRISSON, J., GARCIA-ZUBIA, J., HERNANDEZ-JAYO, U., NAFALSKI, A., NEDIC, Z., GOL, O., MACHOTKA, J., PETTERSSON, M. I., LAGO, T. & HAKANSSON, L. 2009. On Objectives of Instructional Laboratories, Individual Assessment, and Use of Collaborative Remote Laboratories. IEEE Transactions on Learning Technologies, 2, 263-274.
GUZZOMI, A. L., MALE, S. A. & MILLER, K. 2017. Students' responses to authentic assessment designed to develop commitment to performing at their best. European Journal of Engineering Education, 42, 219-240.
HARGREAVES, D. J. 1997. Student Learning and Assessment Are Inextricably Linked. European Journal of Engineering Education, 22, 401-409.
HASSAN, O. A. B. 2011. Learning theories and assessment methodologies – an engineering educational perspective. European Journal of Engineering Education, 36, 327-339.
HENRI, M., JOHNSON, M. D. & NEPAL, B. 2017. A Review of Competency-Based Learning: Tools, Assessments, and Recommendations. Journal of Engineering Education, 106, 607-638.
JACKSON, T., NIKOLIC, S., SHEN, J. & XIA, G. 2018. Knowledge sharing in digital learning communities: a comparative review of issues between education and industry. 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). IEEE, 783-787.
JACOBSON, M. L., SAID, R. A. & REHMAN, H. 2006. Introducing design skills at the freshman level: structured design experience. IEEE Transactions on Education, 49, 247-253.
KELLETT, C. M. 2012. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture. IEEE Transactions on Education, 55, 378-383.
KHAN, S., JAFFERY, M. H., HANIF, A. & ASIF, M. R. 2017. Teaching Tool for a Control Systems Laboratory Using a Quadrotor as a Plant in MATLAB. IEEE Transactions on Education, 60, 249-256.
KHUBALKAR, S., JUNGHARE, A., AWARE, M. & DAS, S. 2018. Unique fractional calculus engineering laboratory for learning and research. The International Journal of Electrical Engineering & Education, 57, 3-23.
KOLLÖFFEL, B. & DE JONG, T. 2013. Conceptual Understanding of Electrical Circuits in Secondary Vocational Engineering Education: Combining Traditional Instruction with Inquiry Learning in a Virtual Lab. Journal of Engineering Education, 102, 375-393.
KOTSAMPOPOULOS, P. C., KLEFTAKIS, V. A. & HATZIARGYRIOU, N. D. 2017. Laboratory Education of Modern Power Systems Using PHIL Simulation. IEEE Transactions on Power Systems, 32, 3992-4001.
LAL, S., LUCEY, A. D., LINDSAY, E. D., SARUKKALIGE, P. R., MOCERINO, M., TREAGUST, D. F. & ZADNIK, M. G. 2017. An alternative approach to student assessment for engineering–laboratory learning. Australasian Journal of Engineering Education, 22, 81-94.
LANG, D., MENGELKAMP, C., JÄGER, R. S., GEOFFROY, D., BILLAUD, M. & ZIMMER, T. 2007. Pedagogical evaluation of remote laboratories in eMerge project. European Journal of Engineering Education, 32, 57-72.
LAVERTY, D. M., MILLIKEN, J., MILFORD, M. & CREGAN, M. 2012. Embedded C programming: a practical course introducing programmable microprocessors. European Journal of Engineering Education, 37, 557-574.
LEGER, A. S. 2019. A Multidisciplinary Undergraduate Alternative Energy Engineering Course. IEEE Transactions on Education, 62, 34-39.
LINDSAY, E. D. & GOOD, M. C. 2005. Effects of laboratory access modes upon learning outcomes. IEEE Transactions on Education, 48, 619-631.
LOUI, M. C. 2016. Board Changes and Neglected Research Topics. Journal of Engineering Education, 105, 3-5.
MA, J. & NICKERSON, J. V. 2006. Hands-on, simulated, and remote laboratories: A comparative literature review. ACM Computing Surveys (CSUR), 38, 7-es.
MAGNUS, D. D. M., CARBONERA, L. F. B., PFITSCHER, L. L., FARRET, F. A., BERNARDON, D. P. & TAVARES, A. A. 2020. An Educational Laboratory Approach for Hybrid Project-Based Learning of Synchronous Machine Stability and Control: A Case Study. IEEE Transactions on Education, 63, 48-55.
MEMIK, E. & NIKOLIC, S. 2021. The virtual reality electrical substation field trip: Exploring student perceptions and cognitive learning. STEM Education, 1, 47.
MILANO, F., VANFRETTI, L. & MORATAYA, J. C. 2008. An Open Source Power System Virtual Laboratory: The PSAT Case and Experience. IEEE Transactions on Education, 51, 17-23.
NIKOLIC, S. 2015. Understanding How Students Use and Appreciate Online Resources in the Teaching Laboratory. International Journal of Online Engineering, 11, 8-13.
NIKOLIC, S., LEE, M. J. W., GOLDFINCH, T. & RITZ, C. H. 2016. Addressing Misconceptions About Engineering Through Student–Industry Interaction in a Video-Augmented 3D Immersive Virtual World. Frontiers in Education Conference (FIE), 2016. IEEE.
NIKOLIC, S., RITZ, C., VIAL, P. J., ROS, M. & STIRLING, D. 2015a. Decoding Student Satisfaction: How to Manage and Improve the Laboratory Experience. IEEE Transactions on Education, 58, 151-158.
NIKOLIC, S., ROS, M. & HASTIE, D. B. 2018a. Teaching programming in common first year engineering: discipline insights applying a flipped learning problem-solving approach. Australasian Journal of Engineering Education, 23.
NIKOLIC, S., STIRLING, D. & ROS, M. 2018b. Formative assessment to develop oral communication competency using YouTube: self- and peer assessment in engineering. European Journal of Engineering Education, 43, 538-551.
NIKOLIC, S., SUESSE, T., GOLDFINCH, T. & MCCARTHY, T. 2015b. Relationship between Learning in the Engineering Laboratory and Student Evaluations. Australasian Association for Engineering Education Annual Conference, Geelong, Australia.
NIKOLIC, S., SUESSE, T., JOVANOVIC, K. & STANISAVLJEVIC, Z. 2021. Laboratory Learning Objectives Measurement: Relationships Between Student Evaluation Scores and Perceived Learning. IEEE Transactions on Education, 64, 163-171.
NIKOLIC, S., SUESSE, T., MCCARTHY, T. & GOLDFINCH, T. 2017. Maximising Resource Allocation in the Teaching Laboratory: Understanding Student Evaluations of Teaching Assistants in a Team Based Teaching Format. European Journal of Engineering Education, 42, 1277-1295.
OGOT, M., ELLIOTT, G. & GLUMAC, N. 2003. An Assessment of In-Person and Remotely Operated Laboratories. Journal of Engineering Education, 92, 57-64.
PARDINES, I., SANCHEZ-ELEZ, M., MARTÍNEZ, D. A. C. & GÓMEZ, J. I. 2014. Online Evaluation Methodology of Laboratory Sessions in Computer Science Degrees. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, 9, 122-130.
POST, L. S., GUO, P., SAAB, N. & ADMIRAAL, W. 2019. Effects of remote labs on cognitive, behavioral, and affective learning outcomes in higher education. Computers & Education, 140, 103596.
RADU, M. E., COLE, C., DABACAN, M. A., HARRIS, J. & SEXTON, S. 2011. The Impact of Providing Unlimited Access to Programmable Boards in Digital Design Education. IEEE Transactions on Education, 54, 174-183.
RODGERS, T. L., CHEEMA, N., VASANTH, S., JAMSHED, A., ALFUTIMIE, A. & SCULLY, P. J. 2020. Developing pre-laboratory videos for enhancing student preparedness. European Journal of Engineering Education, 45, 292-304.
ROSS, R. 2017. MoodleNFC – integrating smart student ID cards with Moodle for laboratory assessment. Australasian Journal of Engineering Education, 22, 73-80.
SALIM, K. R., ALI, R., HUSSAIN, N. H. & HARON, H. N. 2013. An Instrument for Measuring the Learning Outcomes of Laboratory Work. International Engineering and Technology Education Conference, Ho Chi Minh City, Vietnam.
SHYR, W.-J. 2010. Multiprog virtual laboratory applied to PLC programming learning. European Journal of Engineering Education, 35, 573-583.
SPANIAS, A. & ATTI, V. 2005. Interactive online undergraduate laboratories using J-DSP. IEEE Transactions on Education, 48, 735-749.
STEFANOVIC, M., TADIC, D., NESTIC, S. & DJORDJEVIC, A. 2015. An assessment of distance learning laboratory objectives for control engineering education. Computer Applications in Engineering Education, 23, 191-202.
STEGER, F., NITSCHE, A., ARBESMEIER, A., BRADE, K. D., SCHWEIGER, H. & BELSKI, I. 2020. Teaching Battery Basics in Laboratories: Hands-On Versus Simulated Experiments. IEEE Transactions on Education, 63, 198-208.
VIAL, P. J., NIKOLIC, S., ROS, M., STIRLING, D. & DOULAI, P. 2015. Using Online and Multimedia Resources to Enhance the Student Learning Experience in a Telecommunications Laboratory within an Australian University. Australasian Journal of Engineering Education, 20, 71-80.
VOJINOVIC, O., SIMIC, V., MILENTIJEVIC, I. & CIRIC, V. 2020. Tiered Assignments in Lab Programming Sessions: Exploring Objective Effects on Students' Motivation and Performance. IEEE Transactions on Education, 63, 164-172.
WOLF, T. 2010. Assessing Student Learning in a Virtual Laboratory Environment. IEEE Transactions on Education, 53, 216-222.
ZINE, O., ERROUHA, M., ZAMZOUM, O., DEROUICH, A. & TALBI, A. 2018. SEITI RMLab: A costless and effective remote measurement laboratory in electrical engineering. The International Journal of Electrical Engineering & Education, 56, 3-23.