Law and Method

Article

Blended Learning in Legal Education

Using Scalable Learning to Improve Student Learning

Keywords: legal education, blended learning, Scholarship of Teaching and Learning, student learning
Authors: Mr.dr. Emanuel van Dongen and Dr. Femke Kirschner
DOI: 10.5553/REM/.000048
Suggested citation: Mr.dr. Emanuel van Dongen and Dr. Femke Kirschner, 'Blended Learning in Legal Education', LaM May 2020, DOI: 10.5553/REM/.000048

Abstract

    Education should be aimed at supporting student learning. ICT can support this learning: it may help students to learn and increase their involvement, and thus their efforts. Blended learning has the potential to improve students' study behaviour, thus becoming an indispensable part of their education. It may improve their level of preparation, so that face-to-face education becomes more efficient and more profound (e.g. by offering more challenging tasks), lifting the learning process to a higher level. Moreover, the interaction between students and teachers may be improved by using ICT. A necessary condition for lifting students' learning to a higher (better: deeper) level is that all students acquire basic knowledge before they engage in face-to-face teaching. In a first-year course, Introduction to Private Law, we recently introduced a Scalable Learning environment. This environment allows factual knowledge to be acquired and tested at an individual pace, in a modern and appealing way (independent of time and place). The link between online and offline education during face-to-face teaching is made by using the Learning Analytics provided by the Scalable Learning environment. After the implementation of Scalable Learning, its effect on learning was surveyed by means of questionnaires. The results at the beginning and at the end of the course were compared and related to the approaches taken by teachers as well as to the exam results. This article presents the outcomes of this study.


    • 1. Introduction

      In contemporary society, digitalisation is proceeding at the speed of light. In this rapidly changing environment, it is important to keep working on, and reflecting on, the use of digitalisation in (academic) education. After all, digitalisation can be a means to improve the quality of education.[1] Encouraged by the financial impulse given by the Executive Board to the Educate-it programme,[2] Utrecht University therefore focuses on blended learning as an integral part of its education.[3] This phenomenon is not limited to Utrecht University, but can also be seen elsewhere in the Netherlands and abroad. Blended learning can be defined as 'a formal education program in which a student learns at least in part through online delivery of content and instruction with some element of student control over time, place, path, and/or pace and at least in part at a supervised brick-and-mortar location away from home'.[4]
      Blended learning has received considerable attention in Dutch legal education over the past years. De Vries, Director of Education of the Department of Law at the Faculty of Law, Economics and Governance (LEG) of Utrecht University, recently emphasised the added value of digital resources in (legal) education, at least as long as they serve the study of law. 'Blended learning, as a structural application in legal education, allows students to master the law at a higher level. In this way, students can get into a study rhythm that allows them to connect the scarce contact moments with each other', according to De Vries.[5] However, one can question whether blended learning actually contributes to students' learning processes and to the quality of education.[6] This article proceeds as follows. First, the educational context is discussed, i.e. the importance of the focus on student learning and the effects and possibilities of blended learning. Second, the teaching background and context as well as the pilot with the Scalable Learning environment are described. Third, the methods are discussed, followed by the results and their evaluation. Finally, in the concluding discussion some recommendations are provided as to where to find (further) possibilities to stimulate students towards a deep approach to their learning.

    • 2. Educational Context: Blended Learning and the Effects on Student Learning

      2.1. Focus on Student Learning

      All education, whether online or offline, should be aimed at supporting the learning process of students. As Biggs and Tang state, the focus should be on what students do, not primarily on what teachers do; what teachers do should serve student learning.[7] In the educational literature, a common distinction is made between deep and surface approaches to learning.[8] In a surface approach to learning, the student's intention is to get the task done with minimum effort while still meeting the course requirements, i.e. by routinely memorising only facts and procedures (rote learning). On the other side of the continuum is the deep approach to learning, meaning that a student is actively engaged in the search for underlying meanings, i.e. by relating ideas to previous knowledge and experience. Deep learning is a way of learning aimed at understanding the meaning behind (legal) texts, critically examining new facts and ideas, tying them into existing cognitive structures and discovering links between ideas. A deep learning approach is of key importance for the engagement of students with their subject material, and results in an improved quality of learning outcomes.[9] Various encouraging and discouraging factors can affect the adoption of deep approaches to learning; these may be situated in the context of a learning environment, in students' perceptions of that context, and in individual characteristics of the students themselves (e.g. study skills, level of interest, etc.).

      2.2. Blended Learning, Students' Preparation and Face-to-Face Education

      The use of information (and communication) technology (I(C)T), combined with (various types of) in-class learning activities, can support students' (higher levels of) learning.[10] One way in which blended learning has the potential to do so is when it is implemented such that students get the opportunity to prepare themselves for class by being enrolled in an online learning environment. Such online learning environments give students the opportunity to prepare themselves for class, independent of time, place or pace, in a setting designed to optimise learning. Unlike paper-based materials, a learning environment that uses IT can implement a number of design principles that have been shown to facilitate learning: content can be presented in various forms (e.g. text, video, audio),[11] hypertext makes it easy to navigate through the information,[12] and immediate feedback can be added to formative assessments.[13] Students' preparation by means of an IT-based online learning environment, i.e. a blended learning environment, could thus support (deep) learning. In the literature, however, the success of e-learning is often considered from an institutional or technological point of view, or is based on the question whether e-learning initiatives are continued or not. In our view, this should not be the decisive criterion. Our point of view is, as already mentioned, that e-learning initiatives should aim to improve the quality of teaching and learning.
      One could also say that face-to-face education can become more focused on deep learning when an online learning environment is used that encourages students to prepare themselves for class. A recent meta-analysis on flipping the classroom shows, on average, a small positive effect on learning outcomes. Van Alten and others call flipping the classroom a promising pedagogical approach when appropriately designed.[14] In this article we will describe our findings as to the question whether such a positive effect has been found in our situation, in which we used Scalable Learning, which is a way of flipping the classroom.[15]

      2.3. The Role of the Teacher

      The form of blended learning just described provides the possibility to improve face-to-face classroom interaction among students and between students and teachers. The latter interaction is very important, because it is one of the factors that encourage or discourage the adoption of deep(er) approaches to learning; i.e. the approach students take to the learning materials is influenced by the role the teacher takes on.[16] If teachers practise an approach that is more student-oriented, focus more on changing students' conceptions and are more involved, students are more inclined to adopt deep approaches to learning.[17] If teachers instead focus (only) on transmitting knowledge, students are less inclined to adopt deep approaches to learning. This fits the two ways of teaching distinguished by Trigwell and others: one that focuses on transmitting knowledge and one that focuses on students and on achieving a change in their conceptions. The first way of teaching more likely leads to a surface approach, the second to a deep approach.[18] Because the approach students adopt is not a personality trait, but is also related to their perception of the task to be accomplished,[19] teachers' conceptions of teaching and their beliefs as to the purpose of legal education have consequences for their teaching approach and for the perceptions students have of their tasks.[20] In our study we will also consider the effect of teaching approaches on the adoption of (surface and/or deep) approaches to learning, and see whether approaches focused on information transmission rather lead to a surface approach (and lower quality of learning outcomes), and approaches focused on changing conceptions rather lead to a deep approach to learning (and higher quality of learning outcomes).

    • 3. A New Online Learning Environment: Scalable Learning

      3.1. Introduction: Background of the Teaching Environment

      The course 'Introduction to Private Law - Property Law' is a first-year course in the law curriculum; it is the second private-law course in the curriculum. Approximately 700 students take part in this course every year. The course lasts ten weeks, including an exam week. Each teaching week consists of a two-hour lecture and two two-hour tutorials. The lectures take place in large groups (approx. 700 students), the tutorials in smaller groups (approx. 25 students). Lectures are used for transferring knowledge, although interactive elements are nowadays increasingly incorporated into them. In the smaller groups, active learning and the active participation of students are crucial. Tutorials are given by various teachers, who all have their own style and methods. Students prepare by studying the study materials (literature and case law) on the most important property law doctrines and the pertaining conceptual framework, i.e. doctrines such as possession, ownership, transfer of ownership, prescription, etc. They also have to prepare the assignments carefully, elaborating their solutions in writing. Self-study assignments at knowledge level, as part of an e-learning environment, have to be completed by students, who respond to questions and receive (automatically generated, pre-programmed) online feedback. Students have both an intermediate and a final exam, consisting of open-ended questions, solutions to (hypothetical) cases, and discussion of (theoretical) propositions.
      The Scalable Learning environment[21] consisted of nine knowledge clips aimed at imparting basic knowledge to students in an appealing way, at their own pace. The clips lasted between 2.5 and 7 minutes. The environment is intended to activate 'prior knowledge'. We added basic questions to the knowledge clips on some of the most important topics of the course, to allow students to test whether they had understood the material and to alert them to important concepts. Different types of questions were used in the e-learning environment (as part of the knowledge clips) in order to contribute to a more varied and more challenging form of education. The topics were as follows: the system of property law; possession; looking up important manuals in the digital library; delivery of movable property; the causal system; commingling and specification; accession and specification; accessoriness and droit de suite; and the Roman right of retention. During the tutorials, assignments are used to practise applying the acquired knowledge and to discuss more difficult matters. The focus is on the skills needed to solve cases, analyse judgments, and analyse and apply legislative provisions. During the tutorials, the teacher could then try to make students gain deeper insight through in-depth questions. In addition, we used Learning Analytics, i.e. the 'measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs' (Long & Siemens 2011). The Learning Analytics in Scalable Learning made it possible for teachers to see students' weekly preparation completion, to see lecture and quiz completion percentages, to monitor when students pause a clip, when they get confused, when and what questions they have and, ultimately, when they return to an earlier moment in the clip. Learning Analytics made it possible for teachers to register data and/or scores, view them prior to teaching, and incorporate the results into the teaching material, allowing them to address issues that students perceive as difficult. In the module review, the teacher could view the answers to the questions and use these in the class review. We used Learning Analytics for two purposes: 1. to track students' activity (as a minimal preparation for in-class education); 2. as a starting point for our in-class discussion.

    • 4. Methods

      4.1. Starting Point: (Previous) Evaluations

      After the course, a (formalised) discussion took place between a group of students who took the course (organised, monitored and chaired by student members of the Education Committee) and the course coordinator; it showed that our knowledge clips were generally well received. As the clips were made by students, who were left some freedom as to how to shape them, their quality varied somewhat. The official evaluation confirmed this picture. Student satisfaction, however, is not sufficient to conclude that a learning environment contributed to students' learning. In this respect, another use of Scalable Learning at another department of the LEG Faculty of Utrecht University, namely Governance, has been evaluated by means of a focus group, an interview and a questionnaire at midterm and at the end of the course (N = 78 and 53, respectively) (De Jong & Heres 2018). In that course too, students were generally positive about the knowledge clips. They said these helped them acquire a better understanding of the material. According to teachers, students had better basic knowledge when entering the classroom, and so were better prepared. Teachers said that more in-depth questions were asked during their lessons. The question remains whether the use of Scalable Learning automatically leads to a deeper level of learning or not, and what (crucial) role teachers play (the teacher might be a mediating variable).

      4.2. Research Question and Expected Outcomes

      The remainder of this article presents the outcomes of a quantitative study on the (possible) change in surface vs. deep learning of law students in their first-year 'Introduction to Private Law' course as a result of the introduction of the new blended learning environment.[22] The purpose of this study is to measure the effect of a blended course design, which focuses on acquiring basic knowledge and on maintaining the continuity of students' learning, and of the teaching approaches taken during face-to-face meetings, on the preparation, learning approaches and learning outcomes of first-year law students. The research questions were as follows: What are the effects of the new (blended) course design on the preparation, the learning approaches and the learning outcomes of first-year law students? What effect does the teachers' approach to teaching have on students' learning?
      We expected that students would be more involved and better prepared when using the online learning environment, considering the modern and digital way it was presented and the semi-obligatory nature of its use, and that it would indirectly lead to deeper learning, as more time could be devoted to promoting such an approach during class by the teacher, and also to improved learning outcomes. We therefore expected the approach taken by the teachers to be of importance. It has to be noted that the teachers were given no explicit instruction as to which approach to teaching to take (although, of course, more experienced teachers are often familiar with different kinds of teaching and with the difference between surface and deep approaches to learning).

      4.3. Data Collection

      Before the actual research underlying this article was conducted, approval for the research design was obtained from the Faculty of LEG's Ethical Review Committee.[23] Concerning preparation, data were collected in week six of the course. Students' preparation as to basic knowledge was measured by looking at the completion of the Scalable Learning environment (i.e. whether they did not complete, partially completed or fully completed the learning environment). Data were collected only in week six, for three reasons. First, uncertainties about the use of this digital environment were expected to have evened out by that time. Second, the topic of the question that students had to answer during the exam (question 4A) corresponded with the topic presented during week six, so this week seemed best suited for comparing preparation with actual results. Third, as the Learning Analytics had to be entered manually for each student of each tutorial, it was impossible to cover more weeks. The entire student cohort was recorded, except for students who had taken this course before but had failed the exams, i.e. approx. 600 students. The primary aim was to check the correctness of the premise, namely whether students indeed had more basic knowledge; the secondary aim was to find out whether there was a correlation with the mark on question 4A of the exam.
      Concerning learning approaches, two questionnaires were administered. During the first week of Introduction to Private Law, the approx. 600 students filled in the first questionnaire during the tutorials; during the last week of the course, the second questionnaire was filled in during the tutorials in 26 student groups. The first questionnaire was thus filled in prior to the start of the course content, and the second during the last tutorial, i.e. the second-to-last teaching moment; after all, experience shows that students often skip the last teaching moment. Two adapted versions of the so-called Study Process Questionnaire (R-SPQ-2F) were used.[24] The questionnaire was adapted and made applicable to Introduction to Private Law by aligning it to the course content. The questionnaires contained 20 questions on students' perception of their study process, to be answered on a 5-point Likert scale (1 = totally disagree; 5 = totally agree), measuring their deep and surface approaches to learning (each with a motive and a strategy subscale). Three extra questions on the use of knowledge clips, digital environments and/or Scalable Learning were added. With these questionnaires we intended to obtain a more profound understanding of how students learn (with the method of learning at the start of the course as a baseline), and whether this changes during the course Introduction to Private Law.
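      As an illustration of how such scale scores are derived, the following is a minimal sketch in Python; the code, column names and file handling are our own assumptions (the study itself used Excel and SPSS). It computes the deep and surface scale means from the 5-point Likert responses, using the item-to-subscale assignment of the published R-SPQ-2F; this mapping should be verified against the version actually administered before reuse.

      ```python
      # Sketch: scoring the R-SPQ-2F (Biggs, Kember & Leung 2001).
      # Assumed input: one row per student, columns 'q1'..'q20' with values 1-5.
      import pandas as pd

      DEEP = [1, 2, 5, 6, 9, 10, 13, 14, 17, 18]      # deep motive + deep strategy items
      SURFACE = [3, 4, 7, 8, 11, 12, 15, 16, 19, 20]  # surface motive + surface strategy items

      def score_spq(responses: pd.DataFrame) -> pd.DataFrame:
          out = pd.DataFrame(index=responses.index)
          out["deep"] = responses[[f"q{i}" for i in DEEP]].mean(axis=1)
          out["surface"] = responses[[f"q{i}" for i in SURFACE]].mean(axis=1)
          return out
      ```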
      Concerning the teachers' approach to teaching, the ten teachers involved handed in their questionnaires between the final week of the course and one week after it. These questionnaires were meant to gain insight into the activities and the role tutorial teachers take on. This questionnaire was a modified version of the Approaches to Teaching Inventory (ATI),[25] consisting of 22 questions with a 5-point Likert scale (ranging from 'this item was only rarely/never true of me' to '(almost) always true of me'). Examples of the questions asked are: 'During the seminars I thought it was important to present as much factual knowledge as possible to students, so that they know what they have to learn for the course Introduction to Private Law' and 'My aim was to help students develop new insights.' The ATI contains two scales, representing the two (fundamentally different) approaches to teaching, namely the information transmission/teacher-focused approach and the conceptual change/student-focused approach. Each scale contains two sub-scales: an intention and a strategy sub-scale.[26] Two additional questions were added on teachers' use of Scalable Learning and its Learning Analytics in their teaching.[27]
      Concerning the learning outcomes, the results of the final exam were collected; the results of question 4A were collected separately, since it tested a higher level of learning (an analysis of a statement about property law) with regard to the subject discussed in the Scalable Learning environment in the sixth week.
      After all this information was collected, the results from the ATI were entered into an Excel sheet. The modified version of the R-SPQ-2F was processed by the Test and Evaluation Service of Education and Learning, FSW, who also imported the results into an Excel sheet. The exam results were entered into Excel, checked, corrected and supplemented where needed. Finally, all Excel files were merged into one and subsequently imported into SPSS.
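      For readers who prefer a scripted version of this merge step, a minimal sketch in Python/pandas is given below; the file names, key columns ('student_id', 'group') and the choice of join types are our own assumptions, not those of the original workflow.

      ```python
      # Sketch of the merge described above: student questionnaires, exam results
      # and teacher (ATI) scores combined into one table for analysis.
      import pandas as pd

      spq = pd.read_excel("rspq2f_pre_post.xlsx")   # hypothetical: R-SPQ-2F scores per student
      exams = pd.read_excel("exam_results.xlsx")    # hypothetical: final grade and question 4A
      ati = pd.read_excel("ati_teachers.xlsx")      # hypothetical: one row per tutorial group

      merged = (
          spq.merge(exams, on="student_id", how="outer")  # keep students missing either part
             .merge(ati, on="group", how="left")          # attach teacher scores via group number
      )
      merged.to_csv("merged_for_spss.csv", index=False)   # hand-off file for SPSS
      ```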

    • 5. Results

      5.1. Validation and General Results

      In the pre-course survey there were 502 responses, and in the post-course survey 452 responses; 612 students (of the 696 enrolled in the course) participated in the final exam. The scale reliability, in other words the homogeneity of the items of the two questionnaires, was calculated by means of Cronbach's alpha (α), a measure of internal consistency, i.e. of how closely a set of items is related as a group. The Cronbach's α for the 10-item part on the deep approach of the revised version of the Study Process Questionnaire was .710.[28] The Cronbach's α for the 10-item part on the surface approach of the same questionnaire was .745.[29] The Cronbach's α for the 22 items of the Approaches to Teaching Inventory had to be measured for the two distinct constructs: the information transmission/teacher-focused approach to teaching (ITTF) received an α of .888, while the conceptual change/student-focused approach to teaching (CCSF) received an α of .529. These values indicate that the Study Process Questionnaire and, as far as the ITTF construct is concerned, the Approaches to Teaching Inventory are reliable and valid to use in this context. The Cronbach's α of the CCSF scale indicates insufficient reliability; conclusions about the influence and/or role of this construct must therefore be treated with caution.
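      As an aside for the statistically inclined reader, Cronbach's α can be computed directly from the item responses; the following is a minimal sketch in Python (our own illustration, not the tooling used in the study).

      ```python
      # Sketch: Cronbach's alpha for one scale (e.g. the ten deep-approach items).
      # items: one row per respondent, one column per item, values 1-5.
      import pandas as pd

      def cronbach_alpha(items: pd.DataFrame) -> float:
          items = items.dropna()
          k = items.shape[1]                             # number of items in the scale
          item_vars = items.var(axis=0, ddof=1).sum()    # sum of the item variances
          total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
          return (k / (k - 1)) * (1 - item_vars / total_var)
      ```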
      We asked students to compare the usefulness of the knowledge clips in the Scalable Learning environment with knowledge clips used in previous courses in the first year of the curriculum. They answered these questions on a 5-point Likert scale (ranging from 'this item was only rarely/never true of me' to '(almost) always true of me'). Students believed that, in comparison, our knowledge clips helped them less in their preparation for the face-to-face meetings and the exams (mean difference between our course and previous courses (M) = -1.15, standard deviation (SD) = 1.49, n = 381). Furthermore, on average they indicated a similar contribution to their comprehension of the learning material, in comparison with previous courses (M = -0.053, SD = 1.39, n = 385). Finally, compared with previous courses in which online environments were used, their motivation to study the material on average slightly decreased (M = -0.133, SD = 1.32, n = 385).

      5.2. Changes in Students' Learning Approaches During the Course

      Based on the pre-course and post-course questionnaires, high scores on one approach to learning (deep or surface) are moderately negatively correlated with scores on the other approach. The strength and direction of the relationship between two variables was summarised numerically by means of the Pearson correlation coefficient (r).[30] As to the (statistically) significant correlations: the deep approaches of students at the pre- and post-course measurement moments were positively correlated, r(370) = .545, p < .001, while the surface approaches at the beginning and at the end of the course were (even more strongly) positively correlated, r(369) = .637, p < .001. A positive relationship corresponds to an increasing relationship between the two variables. This shows that the deep and surface approaches are rather stable. Furthermore, there is a medium negative correlation between the pre-course deep approach and the pre-course surface approach, r(474) = -.350, p < .001; the same applies to the post-course deep and surface approaches, r(436) = -.411, p < .001. A negative relationship corresponds to a decreasing relationship between the two variables.[31]
      In order to test the hypothesis that students who did (partly or fully) use Scalable Learning and students who did not use it differed in exam results, degree of self-regulated learning and (changes in) deep and surface approaches, an equally sized random sample of the first group was taken and compared with the second group by means of an independent-samples t-test. Additionally, the assumption of homogeneity of variances was tested via Levene's test. With regard to self-regulated learning (at the beginning of the course, at the end of the course, and the difference between the two), equal variances could be assumed, but no significant differences existed. As to the exam result on question 4A, equal variances could not be assumed, and no significant differences existed between the two groups. Also with regard to the differences between pre- and post-course measurements of deep and surface approaches, equal variances could not be assumed, but no significant differences between the two groups existed.
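      A minimal sketch of this group comparison in Python follows; it is our own illustration (the study used SPSS), and the column names are assumed. Levene's test is used here to decide the equal-variance flag of the t-test.

      ```python
      # Sketch: comparing Scalable Learning users and non-users on question 4A.
      from scipy import stats
      import pandas as pd

      df = pd.read_csv("merged_for_spss.csv")                 # hypothetical merged data set
      users = df.loc[df["used_scalable"] == 1, "exam_4a"].dropna()
      non_users = df.loc[df["used_scalable"] == 0, "exam_4a"].dropna()
      users = users.sample(n=len(non_users), random_state=1)  # equally sized random sample

      lev_stat, lev_p = stats.levene(users, non_users)        # homogeneity of variances
      t, p = stats.ttest_ind(users, non_users, equal_var=(lev_p > .05))
      print(f"Levene p = {lev_p:.3f}; t = {t:.3f}, p = {p:.3f}")
      ```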
      How, on average, did the students' approaches change during the course? Compared with the pre-course surveys, the post-course surveys showed no significant increase or decrease in either surface or deep approaches to learning.[32] The post-course approaches to learning were compared with the pre-course approaches by using a paired-samples t-test (a statistical method used to test the mean difference between two sets of paired observations). There was no significant difference between the scores for the deep approach at the beginning (M = 3.12, SD = 0.50) and at the end of the course (M = 3.10, SD = 0.54); t(369) = .910, p = .363. Neither was there a significant difference between the scores for the surface approach at the beginning (M = 2.50, SD = 0.59) and at the end of the course (M = 2.50, SD = 0.59); t(368) = .186, p = .852. This outcome is remarkable when compared with our previous study, in which the deep approach results decreased.[33]
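      The pre/post comparison reported above can be reproduced with a paired-samples t-test; the sketch below is our own illustration, with assumed column names (deep_pre/deep_post being the R-SPQ-2F deep-scale means).

      ```python
      # Sketch: stability (Pearson r) and mean change (paired t-test) of the deep approach.
      from scipy import stats
      import pandas as pd

      df = pd.read_csv("merged_for_spss.csv").dropna(subset=["deep_pre", "deep_post"])

      r, r_p = stats.pearsonr(df["deep_pre"], df["deep_post"])   # pre/post correlation
      t, t_p = stats.ttest_rel(df["deep_post"], df["deep_pre"])  # paired-samples t-test
      print(f"r = {r:.3f} (p = {r_p:.3f}); t = {t:.3f} (p = {t_p:.3f})")
      ```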

      5.3. Differences in Groups and the Teacher

      No significant difference occurred as to the degree in which the Scalable Learning environment was or was not used by the various groups of students. As described in the last section, both deep and surface approaches remained at the same level throughout the course. However, when comparing individual teachers and looking into the differences in increase or decrease of deep approaches to learning, two teachers (teachers 2 and 7) apparently achieved remarkably better results than the other teachers (see Table 1).

      Table 1: Difference between post- and pre-course measurements in deep (DA21) and surface (SA21) approach, per teacher; number of respondents per group (n), mean (M) and standard deviation (SD).

      Deep approach (DA21)
      Teacher    n     M         SD
      1         56    -0.0509    0.48929
      2         50     0.2120    0.42852
      3         26     0.0019    0.38613
      4         42    -0.1298    0.58862
      5         37    -0.0757    0.36999
      6         47    -0.0489    0.58491
      7         14     0.3143    0.51119
      8         25     0.0920    0.51065
      9         46    -0.1978    0.41925
      10        27    -0.1315    0.46722

      Surface approach (SA21)
      Teacher    n     M         SD
      1         51     0.0000    0.47791
      2         51    -0.1118    0.52034
      3         30    -0.1200    0.40612
      4         42    -0.0310    0.55281
      5         36     0.0278    0.41601
      6         48    -0.0208    0.49667
      7         14    -0.3643    0.72495
      8         25     0.0400    0.51962
      9         46     0.1652    0.47992
      10        26     0.2038    0.40815

      With regard to the decrease of the surface approach, teacher 7 and, to a lesser degree, teachers 2 and 3 also stand out above the rest. It must be noted that the number of respondents (note: not the number of students in the group) is by far the smallest for teacher 7. This might have influenced the outcome, although this cannot be ascertained. We also found that face-to-face teaching can make quite a difference. Based on the differences in deep and surface approaches during the course, the teachers were divided into three (unequal) groups. These groups were made on the basis of the differences between post- and pre-course measurements in deep and surface approach: the 'high-achieving' group of teachers scored highest on the change in deep approach and lowest on the change in surface approach; the 'low-achieving' group has the opposite characteristics; and the middle group has results that fall in between.
      As we were interested in the general difference in increase and/or decrease of deep/surface approaches, an analysis of variance (ANOVA) was used, i.e. an analysis in which group means are compared in order to determine whether any of them differ statistically significantly from each other. The 'high-achieving' teachers taught groups in which the mean deep approach of students at the beginning of the course was lower (M = 2.97, SD = .426) than in the other groups (M = 3.10, SD = .501 and M = 3.20, SD = .483, respectively). Of course, in such a group an increase is more probable and to be expected.[34] A significant difference in the scores for the information transmission/teacher-focused approach to teaching (ITTF) and the conceptual change/student-focused approach to teaching (CCSF)[35] exists between the three groups of teachers: strangely, the highest score on the information transmission/teacher-focused approach was found in the worst-performing group, the second highest in the best group and the lowest in the middle group of teachers. The mean scores of the 'high-achieving' and 'low-achieving' groups of teachers are, however, very close to each other (M = 3.24, SD = .111 vs. M = 3.14, SD = .668). In order to better understand what teachers actually did, and not to base conclusions only on teachers' self-perceptions, a study of their actual behaviour is needed. Unfortunately, this was not possible in this study, but it would be a fruitful addition to further studies.
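      A minimal sketch of such a one-way ANOVA in Python is given below; the group labels and column names are our own assumptions for illustration.

      ```python
      # Sketch: one-way ANOVA on the change in deep approach across the three teacher groups.
      from scipy import stats
      import pandas as pd

      df = pd.read_csv("merged_for_spss.csv")  # hypothetical merged data set
      groups = [g["deep_change"].dropna() for _, g in df.groupby("teacher_group")]
      f, p = stats.f_oneway(*groups)           # compares the three group means
      print(f"F = {f:.3f}, p = {p:.3f}")
      ```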

      5.4. Predictors of Deep Learning Approaches

      The first part of our research question concerned the effects of the new (blended) course design on the preparation, the learning approaches and the learning outcomes of first-year law students. One conclusion we can draw from our study is that merely watching and/or completing the Scalable Learning environment did not in itself have any effect on the surface or deep approaches to learning (nor on the exam mark). Our next question concerned the effect of teachers' teaching approaches on the approaches to learning and/or the exam results: how are the teaching approaches (information transmission/teacher-focused (ITTF) or conceptual change/student-focused (CCSF)) related to students' deep/surface approaches (possibly in combination with the degree of preparation, based on students' efforts in the Scalable Learning environment)? Unfortunately, some teachers left some questions of the questionnaire unanswered. We therefore replaced the missing values with the series' mean values.
      The students' approach to learning at the end of the course is modelled as the result of the pre-course level of deep learning and the influence of the teacher.[36] A linear regression was calculated to predict the deep approach of students at the end of the course, based on both the level of deep learning at the start of the course and the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 81.734, p < .001), with an R² of .308. The predicted deep approach at the end of the course (DA2) is equal to 1.867 + (0.596 × DA1) – (0.175 × CCSF). It appeared that the pre-course level of the deep approach was very dominant.[37] A linear regression was also calculated to predict the changes in deep approach that occurred during the course, based on both the level of deep learning at the start of the course and the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 42.940, p < .001), with an R² of .190. The predicted change in deep approach during the course (DA21) is equal to 1.867 – (0.404 × DA1) – (0.175 × CCSF).[38] Finally, a linear regression was calculated to predict the changes in deep approach during the course based only on the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 6.889, p = .001), with an R² of .036. The predicted change in deep approach (DA21) is equal to 1.185 – (0.092 × ITTF) – (0.252 × CCSF). Interestingly, ITTF is significant here.[39][40]
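      For illustration, the first of these regressions could be reproduced as follows; this is our own sketch (the study used SPSS), with assumed column names deep_post (DA2), deep_pre (DA1) and ccsf (CCSF).

      ```python
      # Sketch: OLS regression predicting the post-course deep approach (DA2)
      # from the pre-course deep approach (DA1) and the teacher's CCSF score.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("merged_for_spss.csv")  # hypothetical merged data set
      model = smf.ols("deep_post ~ deep_pre + ccsf", data=df).fit()
      print(model.summary())  # intercept and coefficients correspond to c, beta1, beta2
      ```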
      A linear regression was calculated to predict the surface approach of students at the end of the course, based on both the level of surface learning at the start of the course and the teaching approaches taken by teachers. A significant regression equation was found (F(2,366) = 27.058, p < .001), with an R² of .418. The predicted surface approach at the end of the course (SA2) is equal to 0.148 + (0.649 × SA1) + (0.202 × CCSF). A linear regression was also calculated to predict the changes in surface approach during the course, based on both the level of surface learning at the start of the course and the teaching approaches taken by teachers. A significant regression equation was found (F(2,366) = 43.313, p < .001), with an R² of .191. The predicted change in surface approach during the course (SA21) is equal to 0.148 – (0.351 × SA1) + (0.202 × CCSF). Finally, a linear regression was calculated to predict the changes in surface approach during the course based only on the teaching approaches taken by teachers. The level of ITTF was not significant and was therefore deleted from the model. Only 2.4% of the variance in the change of surface approaches can be explained by looking only at the concept-changing approach of teachers: a significant regression equation was found (F(1,367) = 8.922, p = .003), with an R² of .024. The predicted change in surface approach (SA21) is equal to -0.868 + (0.241 × CCSF).
      There was no significant correlation between the efforts in Scalable Learning and the deep approach to learning at the end of the course. In response to the question whether teachers used the Learning Analytics from Scalable Learning in the construction of their lessons, students on average gave a quite neutral answer (M = 3.0). In response to the proposition that they tried to use the Learning Analytics from Scalable Learning to connect their teaching with the questions of the students, teachers were also neutral, but slightly more positive (M = 3.3). In the final section it will be argued that this, unfortunately, is a missed opportunity, and some ideas for improvement will be presented.

      5.5. Predictors of Exam Results[41]

      Watching the knowledge clips did not have a significant correlation with higher grades. Furthermore, no statistically significant correlation could be established between the exam results and the level of students' surface or deep approaches to learning at the end of the course. Neither is there a significant correlation between self-regulated learning at the beginning or the end of the course and the exam results. However, a significant (but very weak) negative correlation was found between the change (decrease or increase) in surface approach over the course and the exam results. This points in the direction we hoped for: an increase in surface approach during the course is related to a lower mark on the exam, and a decrease in surface approach to a higher mark. Looking at the exam results, the higher mean marks of one teacher ('teacher 7') were reconfirmed. Although CCSF and DA21 are negatively correlated, r(370) = -0.156, p = .003, and CCSF and SA21 are positively correlated, r(369) = .154, p = .003,[42] no statistically significant correlation between teachers' approaches and the exam results emerged. The absence of such a correlation was surprising, as our previous study indicated a positive correlation between the conceptual change/student-focused approach and exam results.[43] An ANOVA test showed no significant difference in exam results between the groups or between teachers. Teachers' approaches were not found to be a significant model for predicting deep approaches to learning at the end of the course, and the same applies to teachers' approach as a predictor of the exam results. Of course, we have to keep in mind that these conclusions are based on the perception teachers have of their own way of teaching.

    • 6. Conclusions

      The purpose of this study was to measure the effect of a new, flipped course design with integrated blended learning on the learning approaches and learning outcomes of first-year law students in the area of private law (property law). Our point of view is that e-learning initiatives should aim at improving the quality of the teaching and learning experience. The research questions we started with were therefore: What are the effects of our new (blended) course design on the preparation, the learning approaches and the learning outcomes of first-year law students? What effect does the teachers' approach to teaching have on students' learning? We expected that students would be more involved and better prepared when using the online learning environment, considering the modern and digital way it was presented and the semi-obligatory nature of its use (we did not expect a difference between honours and non-honours students), and that it would indirectly lead to higher/deeper learning, as more time could be devoted to promoting such an approach during class, and also to improved learning outcomes. We therefore expected the approach taken by the teachers to be of importance. Coming back to these issues, and answering these questions, three final observations have to be made for further improvement, on the following issues: 1. the digital environment (course design) and students' approaches to learning; 2. the connection between online and offline activities (the role of the teacher); 3. the alignment between exam and learning activities.

      1. Digital environment. The added value of our environment for a better understanding of concepts and a better preparation for class was expected to be similar to that of other digital environments used at an earlier stage of the curriculum. It is remarkable that no significant increase or decrease in either surface or deep approaches to learning was measured. Furthermore, no significant difference was established as to the degree in which the Scalable Learning environment was or was not used by the various groups of students. Clips and questions were made by students (under the guidance of a teacher), which allows us to take a next step in improving the quality of our online learning environments and of the way these foster deep learning. Generally, according to the literature, interaction and active learner engagement are important. In online environments, learners require quality feedback to help them understand topics at a deeper level. Common practice in online learning environments includes reflective practice, learning-by-doing, active discussions and decision making.[44] Czerkawski argues that in order to foster deeper learning, strong support systems, effective pedagogical methods and online community-building activities are necessary. Furthermore, in online learning environments creative and meta-cognitive activities should be strongly emphasised.[45] In our opinion, further reflection is needed as to how these elements could be integrated into the course.

      2. Connection between online and offline activities (role of the teacher). When comparing individual teachers and studying the differences in increase or decrease of deep and surface approaches to learning, some teachers seem to achieve remarkably better results than others. It thus seems that face-to-face teaching can make a significant difference. As the mere viewing and/or completion of the Scalable Learning environment did not show any effect on the surface or deep approaches to learning, the added value for the increase of deep approaches to learning might be found in feedback during the offline activities on the online activities ('bridging the gap'), and possibly in the handing out of assignments within the digital environment. Teachers could profit (even) more from the valuable information they receive from Learning Analytics. We have also found that teaching approaches (combined with students' initial approach to learning) may explain part of the final approach to learning (although the initial deep approach results were quite dominant for the final level of deep approach).

      3. Alignment between exam and learning activities. In a previous Dutch study, it was found that knowledge clips had a significant correlation with higher grades.[46] The outcome of the present study, by contrast, points in a different direction. This could be interpreted as meaning that blended learning has no value. We believe quite the opposite. If blended learning, in this case Scalable Learning, leads to a decrease in the time teachers spend in class on the transfer of basic knowledge, class time can be used more effectively (namely for more in-depth questions and/or more difficult cases). This choice is therefore an efficient one. Another aspect is the statement that assessment drives learning. At first sight, it does not seem a good sign that the approach taken makes no difference for the grade, i.e. for the degree to which the learning outcomes are fulfilled. Why actively engage in an online environment if it is not related in any way to the assessment, and/or if it is of no use for the final assessment? Ideally, the information given online is needed for the fruitful development of deeper learning prior to the final assessment. However, as the learning goals of the course under review mainly concern the lower orders of thinking (such as recall and application of knowledge), both approaches might be adequate for achieving the desired outcome. One remark made by a student in the margin of our questionnaire hits the spot: 'the exams did not reach the scientific level of the in-depth articles that we have to read, and that is unfortunate. This does not motivate understanding and deepening of the materials, but motivates learning by rote' [our translation]. Nevertheless, if higher learning outcomes were achievable, which we believe is possible in subsequent study years, a deep approach to learning should be striven for. Rote learning should therefore be avoided, and exams should also aim at deeper levels of learning.

    • References
    • Alexander, S. (2001). E-learning Developments and Experiences. Education + training, 43(4/5), 240-248.

    • Van Alten, D.C.D., et al. (2019). Effects of Flipping the Classroom on Learning Outcomes and Satisfaction: A Meta-Analysis. Educational Research Review, 28, 1-18.

    • Baeten, M. et al. (2010). Using Student-centred Learning Environments to Stimulate Deep Approaches to Learning: Factors Encouraging or Discouraging their Effectiveness. Educational Research Review, 5(3), 243-260.

    • Biggs, J.B. (1987a). The Study Process Questionnaire (SPQ): User’s Manual, Melbourne: Australian Council for Educational Research.

    • Biggs, J.B. (1987b). Student Approaches to Learning and Studying. Hawthorn, Victoria: Australian Council for Educational Research.

    • Biggs, J., & Tang, C. (2011). Teaching for Quality Learning at University. What the Student Does (4th edn.). Maidenhead: Open University Press/McGraw Hill.

    • Biggs, J., Kember D., & Leung, D.Y. (2001). The Revised Two-Factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133–149.

    • Bishop-Clark, C., & Dietz-Uhler, B. (2012). Engaging in the Scholarship of Teaching and Learning. Sterling, Virginia: Stylus Publishing.

    • Brame, C.J. (2013). Flipping the classroom. Center for Teaching and Learning, Vanderbilt University. Retrieved from http://cft.vanderbilt.edu/guides-sub-pages/flipping-the-classroom/ (last accessed on 25 July 2019).

    • Campbell, J., et al. (2001). Students’ Perceptions of Teaching and Learning: the Influence of Students’ Approaches to Learning and Teachers’ Approaches to Teaching. Teachers and Teaching: Theory and Practice, 7(2), 173-187.

    • Chesterman, S. (2016). Chapter 5. Doctrine, Perspectives, and Skills for Global practice. In C. Gane & R. Hui Huang (Eds.). Legal education in the Global Context. Opportunities and Challenges (pp. 77-85), London-New York: Routledge.

    • Czerkawski, B.C. (2014). Designing Deeper Learning Experiences for Online Instruction. Journal of Interactive Online Learning, 13(2), 29-40.

    • Dietz-Uhler, B. & Hurn, J.E. (2013). Using Learning Analytics to Predict (and Improve) Student Success: A Faculty Perspective. Journal of Interactive Online Learning, 12(1), 17-26.

    • Dihoff, R.E., et al. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54(2), 207-231.

    • Dyckhoff, A.L., et al. (2012). Design and Implementation of a Learning Analytics Toolkit for Teachers. Educational Technology & Society, 15(3), 58-76.

    • Van Dongen, E.G.D., & Meijerman, I. (2019). Teaching a Historical Context in a First-Year ‘Introduction to Private Law’ Course. The Effects of Teaching Approaches and a Learning Environment on Students’ Learning. In V. Amorosi & V.M. Minale (Eds.), History of Law and Other Humanities: Views of the Legal World Across the Time (pp. 551-569), Madrid: Universidad Carlos III.

    • Du, J., Yu, C., & Olinzock, A.A. (2011). Enhancing collaborative learning: Impact of Question Prompts design for online discussion. Delta Pi Epsilon Journal, 53(1), 28-41.

    • Epstein, M.L., et al. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187-201.

    • Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments, cognitive flexibility, and the transfer of complex knowledge: An empirical investigation. Journal of educational computing research, 12(4), 301-333.

    • De Jong, B., & Heres, L. (2018). Evaluatierapport verrijkte kennisclips USBO, Utrecht: UU/USBO.

    • Long, P., & Siemens, G. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 31-40.

    • Marton, F., & Säljo, R. (1976). On Qualitative Differences in Learning: I – Outcome and process. British Journal of Educational Psychology, 46, 4-11.

    • Mayer, R. E., & Moreno, R. (1998). A cognitive theory of multimedia learning: Implications for design principles. Journal of Educational Psychology, 91(2), 358-368.

    • McCray, G.E. (2000). The Hybrid Course: Merging On-line Instruction and the Traditional Classroom. Information Technology and Management, 1(4), 307-327.

    • McGill, T.J., Klobas, J.E. & Renzi, S. (2014). Critical Success Factors for the Continuation of E-learning Initiatives. Internet and Higher Education, 22, 24-36.

    • Oliver, M., & Trigwell, K. (2005). Can 'Blended Learning' Be Redeemed? E-Learning, 2(1), 17-26.

    • Postareff, L., Parpala, A. & Lindblom-Ylänne, S. (2015). Factors Contributing to Changes in a Deep Approach to Learning in Different Learning Environments. Learning Environments Research, 18(3), 315–333.

    • Prosser, M., & Trigwell, K. (2006). Confirmatory factor analysis of the approaches to teaching inventory. British Journal of Educational Psychology, 76, 405-419.

    • Schutgens, R. (2019). Blended learning: mixed feelings. Enkele kanttekeningen bij de digitalisering van het rechtenonderwijs. Ars Aequi, (3), 237-240.

    • Staker, H., & Horn, M.B. (2012). Classifying K–12 Blended Learning, San Mateo: Innosight Institute. Retrieved from https://files.eric.ed.gov/fulltext/ED535180.pdf (last accessed on 25 July 2019).

    • Steenman, S. (2016). Evaluatie gebruik kennisclips. Staats- en Bestuursrecht. Utrecht: Educate-it.

    • Stes, A., De Maeyer, S. & Van Petegem, P. (2008). Een Nederlandstalige versie van de ATI: een valide instrument om onderwijsaanpak van docenten in het hoger onderwijs te meten? Pedagogische Studiën, 85, 95-106.

    • Stes, A., De Maeyer, S. & Van Petegem, P. (2013). Examining the Cross-Cultural Sensitivity of the Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) and Validation of a Dutch Version. PLOS ONE, 8(1), 1-7.

    • Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations between Teachers’ Approaches to Teaching and Students’ Approaches to Learning. Higher Education, 37, 57-70.

    • Vermunt, J.D., & Donche, V. (2017). A Learning Patterns Perspective on Student Learning in Higher Education: State of the Art and Moving Forward. Educational Psychology Review, 29, 269-299.

    • De Vries, U.R.M.T. (2019). ‘Blended learning’ in de studie Rechten. Hoe digitale middelen het juridisch onderwijs versterken. Ars Aequi, (3), 233-236.

    • Yildirim, I. (2017). The Effects of Gamification-based Teaching Practices on Student Achievement and Students’ Attitudes towards Lessons. Internet and Higher Education, 33, 86-92.

    Notes

    • 1 In this way, see the letter of the Dutch Minister of Education, Culture and Science on digitalisation of 16 October 2018.

    • 2 www.uu.nl/nieuws/educate-it-krijgt-extra-miljoenen-voor-verdere-digitalisering-onderwijs (last accessed on 19 May 2019).

    • 3 Utrecht University, Strategic Plan 2016-2020, available at www.uu.nl (last accessed on 19 May 2019).

    • 4 Staker & Horn 2012, p. 3. Critical on the term blended learning are Oliver & Trigwell 2005, who defend subverting the term and using it to describe an approach that focuses on the learner and their learning (instead of on the teacher). These authors suggest that an in-depth analysis of the variation in students’ experience of learning in a blended learning context is needed in the future.

    • 5 De Vries 2019 (our translation).

    • 6 Furthermore, Schutgens 2019 argued that old-fashioned live teaching and, above all, having students present offline have their advantages.

    • 7 Biggs & Tang 2011.

    • 8 Biggs 1987b; Biggs & Tang 2011, esp. p. 24 et seqq.

    • 9 Postareff, Parpala & Lindblom-Ylänne 2015, p. 316 with references.

    • 10 McCray 2000. Furthermore, according to Yildirim 2017, p. 86, blended learning offers ‘various educational options to learners, minimizes the inequality of opportunity, provides individualized solutions pertinent to learning differences and eliminates hindrances related to space and time.’

    • 11 Mayer & Moreno 1998.

    • 12 Jacobson & Spiro 1995.

    • 13 Dihoff et al. 2004; Epstein et al. 2002.

    • 14 Van Asten et al. 2019.

    • 15 See on this topic, e.g., Brame 2013. This is not the first course in the law curriculum at Utrecht University in which the classroom has been flipped in this way; a flipped-classroom concept was already used in the first course of the curriculum (‘Foundations of Law’).

    • 16 See, e.g., Campbell et al. 2001.

    • 17 Baeten et al. 2010.

    • 18 Trigwell, Prosser & Waterhouse 1999.

    • 19 See already Marton & Säljo 1976.

    • 20 Chesterman 2016, p. 77.

    • 21 In the academic year 2018-2019 a project, financed by the Utrecht Education Incentive Fund (Faculty LEG), made it possible to create interactive materials (interactive knowledge clips in Scalable Learning) and to experiment with blended learning.

    • 22 In conducting this study, the approach of Bishop-Clark & Dietz-Uhler 2012 has been followed. A study with a similar but slightly different structure, examining a different course, was conducted by Van Dongen & Meijerman 2019.

    • 23 This (optional) review by the Faculty’s Ethical Review Committee was conducted in order to safeguard the ethical quality of the research. The Ethics Committee of LEG aims to stimulate and facilitate ethical conduct by the faculty with regard to the rights, safety and well-being of the participants in scientific research, i.e. the students in our study.

    • 24 Biggs 1987a; Biggs, Kember & Leung 2001, p. 133 et seqq. See also the Dutch version, received from the Centre of Expertise for Higher Education, University of Antwerp (Stes, De Maeyer & Van Petegem 2013).

    • 25 Stes, De Maeyer & Van Petegem 2008.

    • 26 See Trigwell, Prosser & Waterhouse 1999, p. 62. See also Prosser & Trigwell 2006.

    • 27 The Dutch questionnaires mentioned in the previous footnotes, as well as the questionnaires made in the context of a previous study (Van Dongen & Meijerman 2019), formed the basis for the current questionnaires. These were compared with the original English versions, adapted to the specific field of the Introduction to Private Law course and supplemented with a few questions. Some colleagues proofread the questionnaires, after which we finalised them.

    • 28 The scale runs from 0 to 1, from not at all to perfectly homogeneous (a sketch of how such a homogeneity coefficient is computed follows below).
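
    A minimal sketch in Python of how such a homogeneity coefficient (Cronbach’s α, reported in the next note) can be computed from an items-by-respondents matrix; the function and the scores below are illustrative, not data from this study.

      import numpy as np

      def cronbach_alpha(scores: np.ndarray) -> float:
          # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
          k = scores.shape[1]                      # number of items
          item_vars = scores.var(axis=0, ddof=1)   # variance per item
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars.sum() / total_var)

      # hypothetical responses: four respondents, three items
      scores = np.array([[4, 3, 4], [2, 2, 3], [5, 4, 4], [3, 3, 2]])
      print(round(cronbach_alpha(scores), 3))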

    • 29 These measurements were taken from the pre-course surveys. The Cronbach’s α of the post-course surveys was .777 (deep approach) and .767 (surface approach).

    • 30 A value of 1 means a perfect correlation; 0 means no correlation at all. The sign in front of the number (- or +) indicates whether there is a negative correlation (if one variable increases, the other decreases) or a positive correlation (if one variable goes up, so does the other), as illustrated below.
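
    A small Python illustration of this sign convention, using scipy; the numbers are made up purely to show the behaviour of Pearson’s r and do not come from the study.

      from scipy.stats import pearsonr

      hours = [2, 4, 6, 8, 10]
      grades_up = [5.0, 5.8, 6.5, 7.2, 8.0]    # rises with hours: r close to +1
      grades_down = [8.0, 7.2, 6.5, 5.8, 5.0]  # falls with hours: r close to -1

      print(pearsonr(hours, grades_up))   # positive correlation
      print(pearsonr(hours, grades_down)) # negative correlation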

    • 31 The correlations were significant at the 0.001 level (2-tailed). In order to test the hypothesis that students who did (partly or fully) use Scalable Learning and students who did not use it differed statistically in exam results, degree of self-regulated learning and (differences in) deep and surface approaches, an equal random sample of the first group was taken and compared to the second group by means of an independent samples t-test; additionally, the assumption of homogeneity of variances was tested via Levene’s test (both sketched below). With regard to self-regulated learning (at the beginning of the course, at the end of the course, as well as the difference between the two), equal variances could be assumed, but no significant differences existed. As to the exam result on question 4A, no equal variances could be assumed, and no significant differences existed between the two groups. Also with regard to differences between pre- and post-course measurements of deep and surface approaches, no equal variances could be assumed, but no significant differences existed between the two groups.
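
    A minimal Python sketch of this procedure; the variable names and simulated scores are hypothetical stand-ins, not the study’s data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      users = rng.normal(6.5, 1.2, 80)      # hypothetical exam results of Scalable Learning users
      non_users = rng.normal(6.3, 1.5, 80)  # hypothetical results of the equally sized comparison sample

      # Levene's test for homogeneity of variances
      _, lev_p = stats.levene(users, non_users)
      equal_var = lev_p > .05  # assume equal variances only if Levene's test is non-significant

      # independent samples t-test (Welch's correction when variances are unequal)
      t_stat, t_p = stats.ttest_ind(users, non_users, equal_var=equal_var)
      print(f"Levene p = {lev_p:.3f}, t = {t_stat:.2f}, p = {t_p:.3f}")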

    • 32 Although factor analysis showed five factors explaining roughly 52% of the variance, for this study we have chosen to stick to Biggs’s division into two factors. When the number of factors is fixed at two, the questions load quite well, in accordance with the questions arranged by Biggs under the two approaches to learning (sketched below). Nevertheless, only 33% of the variance can be explained by the distinction between deep and surface approaches to learning (a Kaiser-Meyer-Olkin value of .836 showed that the sample was very well suited to factor analysis). Apparently, many other factors are present.
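
    A minimal Python sketch of this procedure using the third-party factor_analyzer package; the file name and DataFrame are hypothetical stand-ins for the questionnaire data.

      import pandas as pd
      from factor_analyzer import FactorAnalyzer
      from factor_analyzer.factor_analyzer import calculate_kmo

      items = pd.read_csv("spq_items.csv")  # hypothetical file with the questionnaire item scores

      _, kmo_total = calculate_kmo(items)   # sampling adequacy (.836 in this study)
      print(f"KMO = {kmo_total:.3f}")

      fa = FactorAnalyzer(n_factors=2, rotation="oblimin")  # number of factors fixed at two
      fa.fit(items)
      print(fa.loadings_)                   # item loadings on the two factors
      print(fa.get_factor_variance()[2])    # cumulative proportion of variance explained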

    • 33 See Van Dongen & Meijerman 2019.

    • 34 No significant difference in exam marks could be noted.

    • 35 A negative correlation was found between the two approaches to teaching, Pearson’s r(639) = -.136, p = .001.

    • 36 DA2 = c + β1 × DA1 + β2 × CCSF + β3 × ITTF. In this regression model, c is the constant and β1, β2 and β3 are the regression coefficients (a fitting sketch follows below).
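
    A minimal Python sketch of how such a model can be fitted with statsmodels; the file name and column names are hypothetical stand-ins for the course measurements.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("survey.csv")  # hypothetical file with columns DA1, DA2, CCSF and ITTF
      model = smf.ols("DA2 ~ DA1 + CCSF + ITTF", data=df).fit()
      print(model.params)    # c (Intercept) and the coefficients β1, β2, β3
      print(model.rsquared)  # proportion of variance explained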

    • 37 This is in line with Van Dongen & Meijerman 2019, p. 562.

    • 38 Only 3.6% of the variance in the change of deep approaches can be explained by looking only at teachers’ approaches to teaching. A linear regression was calculated to predict the changes in students’ deep approach during the course based solely on the teaching approaches taken by their teachers. A significant regression equation was found (F(2, 367) = 6.889, p = .001), with an R² of .036. The predicted deep approach at the end of the course (DA2) is equal to 1.185 - (0.092 × ITTF) - (0.252 × CCSF); a worked example follows below. Interestingly, ITTF is significant here.
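
    A worked example of this equation in Python; the teacher scores fed in are hypothetical, chosen only to show how a prediction is obtained.

      def predicted_deep_approach(ittf: float, ccsf: float) -> float:
          # the regression equation reported above: DA2 = 1.185 - 0.092*ITTF - 0.252*CCSF
          return 1.185 - 0.092 * ittf - 0.252 * ccsf

      print(predicted_deep_approach(ittf=3.5, ccsf=3.8))  # hypothetical ATI scale scores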

    • 39 As the CCSF measure scored poorly on reliability, not much value can be attached to the final part of this formula.

    • 40 Another unexpected result is the very weak but significant negative correlation between CCSF and SRL at the end of the course: Pearson’s r(450) = -.104, p = .027.

    • 41 When ‘exam grades’ are mentioned here, both the final grade and the performance on the specific question about week 6 of the learning environment are meant. As there were no significant differences, no distinction is made between them in our discussion of the results.

    • 42 Both correlations are significant at the 0.01 level (2-tailed).

    • 43 Van Dongen & Meijerman 2019, p. 562-563.

    • 44 Czerkawski 2014, p. 32, 35.

    • 45 Du, Yu & Olinzock 2011, p. 37.

    • 46 See Steenman 2016.

