Assessment in Undergraduate Competency-Based Medical Education: A Systematic Review

Background: Studies that have methodically compiled the body of research on the competency-based medical education (CBME) assessment procedure and pinpointed knowledge gaps about the structure of the assessment process are few. Thus, the goals of the study were to thoroughly examine the competency-based medical education assessment framework and to create a model assessment framework applicable in the Indian setting. Methods: PubMed, MEDLINE (Ovid), EMBASE (Ovid), Scopus, Web of Science, and Google Scholar were the databases searched. The search parameters were restricted to English-language publications about competency-based education and assessment methods published between January 2006 and December 2020. A descriptive overview of the included research (in tabular form) served as the foundation for the data synthesis. Results: The databases provided 732 records, of which 36 fulfilled the inclusion and exclusion criteria. The 36 studies comprised a mix of randomized controlled trials, focus group interviews, questionnaire studies (including cross-sectional studies), qualitative studies (03), mixed-method studies, and other designs. The papers were published in 10 different journals, with the greatest number appearing in BMC Medical Education (18). The average quality score for the included studies was 62.53% (range: 35.71-83.33%). Most corresponding authors were from the UK (07), followed by the USA (05). 
The included studies were grouped into seven categories based on their dominant focus: moving away from a behavioristic approach to a constructive approach to assessment (01 study), formative assessment (FA) and feedback (10 studies), hurdles in the implementation of feedback (04 studies), utilization of computer- or online-based formative tests with automated feedback (05 studies), video feedback (02 studies), e-learning platforms for formative assessment (04 studies), and studies related to workplace-based assessment (WBA)/mini-clinical evaluation exercise (mini-CEX)/direct observation of procedural skills (DOPS) (10 studies). Conclusions: Various constructivist techniques, such as concept maps, portfolios, and rubrics, can be used for assessments. Self-regulated learning, peer feedback, online formative assessment, online computer-based formative tests with automated feedback, the use of a computerized web-based objective structured clinical examination (OSCE) evaluation system, and the use of narrative feedback instead of numerical scores in the mini-CEX are all ways to increase student involvement in the design and implementation of formative assessment.


Introduction And Background
The goal of medical courses is to effectively prepare future physicians to meet the healthcare needs of the general public. The evaluation methods in India's previous medical education curriculum placed a greater emphasis on knowledge than on competencies. Summative evaluation carried more weight than formative evaluation with feedback. As a result, doctors trained under prior curricula had strong theoretical understanding but lacked practical experience and other soft skills such as professionalism, ethics, and communication [1]. The implementation of competency-based medical education (CBME) was primarily intended to address these deficiencies [1,2].
The CBME framework places a strong emphasis on the competencies required to meet the needs of patients. This method emphasizes connecting teaching, learning, and assessment to actual medical practice. Effective CBME assessment is characterized by certain essential elements. It must be regular and ongoing, enabling more formative assessments to be conducted to direct students' development. The majority of the assessment must be work-based: a crucial element of CBME is the direct observation and evaluation of real-world clinical interactions. The assessment instruments themselves have to adhere to a set of minimal quality requirements in terms of reliability, validity, affordability, and educational impact [1,2].
In competency-based medical education, assessments are not meant to serve as a final judgment, but rather to help students advance to the next level of expertise. Even though written exams are typically employed to assess students, assessments that primarily rely on direct observation to measure skill performance offer more convincing proof that learning objectives have been met. Effective feedback is essential for supporting a learner's professional growth once evaluations have identified their strengths and areas for improvement. Formative assessments are given increasing importance, and feedback is a crucial component of them. Additionally, each student's proficiency is evaluated using objective, quantifiable criteria [3].
Competencies are discernible, quantifiable capabilities that educators want their students to acquire. Knowledge, skills, attitudes, and communication are outlined as areas of competency [3]. In contrast to conventional education, which places a strong emphasis on subject matter, CBME does not start by considering the appropriate amount of content to teach. The main objective of CBME is to define the performance outcomes that students must achieve to demonstrate their competency. As a result, the focus changes from assessing input to assessing output or outcomes [4].
A medical student must be evaluated before we authorize them to handle human life. Ultimately, the healthcare educational community has a professional duty to the general population to guarantee that its students are capable enough to practice independently. Within the framework of CBME, assessment refers to the procedure that allows knowledge, skills, and attitudes to be tested to determine competency [5].
The foundation of any curriculum is assessment. All medical educators would concur that assessments are still the most effective way to promote learning and that an assessment that is not connected with the learning objectives cannot accomplish its goals [6]. However, a major weakness in CBME is the lack of reliable and accurate assessment methods. Appropriate assessment techniques are crucial to the effective implementation of CBME; yet there is still a lack of clarity around the assessment instruments, process settings, and modalities and timeframes of CBME assessments [2,7].
The finest available evidence must serve as the foundation for planning the CBME evaluation framework. We must use the most effective evaluation techniques to satisfy CBME's requirements. Multiple assessment instruments with low validity risks should be included. To make the CBME assessment plan more robust, many additional objective-type settings are required. It is also necessary to include qualitative as well as quantitative evaluation techniques, especially when evaluating non-cognitive skills [8,9].
We need to add more assessment tools to our toolkit for CBME. Given the abundance of evaluation possibilities, these tools ought to be simple to use, enable speedier decision-making for corrective action, be workable, and possess sufficient rigor to be deemed appropriate [10][11][12][13][14].
To solve several significant issues concerning the CBME assessment scheme, more data is required. However, there are not many studies that have methodically compiled the body of research on CBME assessments and pinpointed information gaps about the assessment process's framework. This study aims to fill that gap. Hence, one of the primary objectives of this study was a systematic review of the competency-based medical education assessment framework with reference to the assessment instruments, process settings, and modalities. Building a competency-based medical education model assessment system that may find application in the Indian setting was another objective of this study.

Information Sources
PubMed, MEDLINE (Ovid), EMBASE (Ovid), Scopus, Web of Science, and Google Scholar were the databases searched.

Search Strategy
The search parameters were restricted to English-language publications pertaining to competency-based education and assessment methods that were published between January 2006 and December 2020. We incorporated the relevant database-specific controlled vocabulary words and keyword combinations for every topic in our search approach. Boolean operators were then used to combine these terms, which were then used for database searches. Competency-based medical education (CBME), competency, competence, clinical competence, clinical skills, assessment, assessor, assessment tools, feedback, criterion-referenced evaluation, simulation, objective structured clinical examination (OSCE), workplace-based assessment (WBA), mini-clinical evaluation exercise (mini-CEX), portfolio, multi-source feedback (MSF), reflective practice, and directly observed procedural skills (DOPS) were the key terms used.
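The combination of key terms with Boolean operators described above can be sketched as follows. This is a hypothetical illustration only: the exact term groupings and query strings used for each database are not reported verbatim in the review, so the groupings below are assumptions.

```python
# Hypothetical reconstruction of a Boolean search string of the kind
# described above; term groupings are illustrative, not the actual queries.
concept_terms = [
    '"competency-based medical education"', '"CBME"',
    '"clinical competence"', '"clinical skills"',
]
assessment_terms = [
    '"assessment"', '"formative assessment"', '"feedback"',
    '"workplace-based assessment"', '"mini-CEX"', '"OSCE"', '"DOPS"',
]

def or_block(terms):
    """Join synonyms for one concept with OR, wrapped in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Concept groups are combined with AND, so every record must match at
# least one term from each group.
query = or_block(concept_terms) + " AND " + or_block(assessment_terms)
print(query)
```

In practice each database would additionally receive its own controlled-vocabulary terms (e.g., MeSH headings in PubMed) alongside these keywords.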

Eligibility Criteria
Studies were included if the research setting was competency-based medical education (CBME); if the research focused on assessment tools or activities associated with CBME; if the research was empirical primary research; if the research was quantitative, qualitative, or mixed-methods; if the work was published between January 2006 and December 2020; and if the article was written in English.
Studies were excluded if they were systematic reviews, narrative reviews, other reviews, commentaries, evidence-based perspectives, educational forums, short communications, gazetted notifications, guidelines, case reports, abstracts, or editorials, or if they were not published in peer-reviewed journals.

Data Extraction
Each article was uploaded to Rayyan, a software platform for systematic reviews. Rayyan is a free online tool that helps authors of systematic reviews with literature screening.
Following a preliminary filter for duplicates and search stipulations, the remaining papers' titles and abstracts were examined. The articles' full texts were then evaluated against the inclusion and exclusion criteria to make the final selection.
A two-step screening procedure was used. The first author completed the initial step, which involved removing duplicates and reviewing titles before moving on to abstracts. In the second stage, to determine which research qualified for inclusion, the two authors thoroughly reviewed each study against the criteria. Any differences of opinion on the suitability of the research were settled by consensus.
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline was used to describe the identification, screening, eligibility, and final inclusion of the papers. Research study quality evaluation instruments are designed to evaluate specific research designs, and an appraisal of the quality of evidence is often used to assess the risk of bias. More recently, quality appraisal tools have been framed for appraising reviews with diverse designs. Because this review included studies with various designs, the Quality Assessment Tool for Studies with Diverse Designs (QATSDD) was used to evaluate article quality. This tool was published in 2012 to appraise the methodological quality, evidence quality, and quality of reporting in reviews that include studies with heterogeneous designs (i.e., qualitative, quantitative, mixed- and multi-methods research) using a single tool. The QATSDD tool has a total of 16 criteria, of which 14 apply to qualitative studies, 14 apply to quantitative studies, and all 16 apply to mixed-methods research. The criteria assess the following: theoretical framework; aims/objectives; sample size; representative sample of the target group; the rationale for the choice of data collection tool; detailed recruitment data; statistical assessment of the reliability and validity of the measurement tools; the fit between the stated research question and the method of data collection; the fit between the stated research question and the format and content of the data collection tool; the fit between the research question and the method of analysis; justification for the analytical method selected; assessment of the reliability of the analytical process; evidence of user involvement in design; and strengths and limitations critically discussed. The quality score of each included study was used to assess the risk of bias. Each item is scored on a Likert scale from 0 = "high risk of bias" to 3 = "minimal risk of bias", and the tool has strong reliability and validity in scoring studies with various (mixed) designs. A QATSDD overall percentage score was calculated for each included study. A study with a "low risk of bias" has an overall QATSDD percentage score greater than or equal to 75%. A study with a "moderate risk of bias" has an overall QATSDD percentage score between 50 and 74%, while a study with a "high risk of bias" has an overall QATSDD score between 0 and 49% [15].
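The scoring arithmetic described above can be sketched as follows. The item scores in the example are hypothetical; only the 0-3 item scale, the percentage calculation, and the 75%/50% risk-of-bias cut-offs come from the method described here.

```python
# Sketch of the QATSDD scoring arithmetic. Each applicable item is rated
# 0 ("high risk of bias") to 3 ("minimal risk of bias"); the overall score
# is expressed as a percentage of the maximum attainable.

def qatsdd_percentage(item_scores):
    """Overall score as a percentage of the maximum attainable (3 per item)."""
    max_score = 3 * len(item_scores)
    return 100.0 * sum(item_scores) / max_score

def risk_of_bias(percentage):
    """Map the overall percentage onto the review's risk-of-bias bands."""
    if percentage >= 75:
        return "low risk of bias"
    if percentage >= 50:
        return "moderate risk of bias"
    return "high risk of bias"

# Hypothetical example: a mixed-methods study rated on all 16 QATSDD items.
scores = [2, 3, 1, 2, 2, 2, 1, 3, 2, 2, 1, 2, 2, 0, 2, 3]
pct = qatsdd_percentage(scores)
print(f"{pct:.2f}% -> {risk_of_bias(pct)}")
```

For qualitative-only or quantitative-only studies, the same calculation would run over the 14 applicable items rather than all 16.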

Data Synthesis
A standard data extraction form was used for the data extraction process. The study authors, the year of publication, the corresponding author's country, the journal, the study type, the study setting, the study population or sample size, the data collection and analysis method, the study outcome, and the quality score were extracted from the studies according to a standard format. Textual narrative synthesis was used for data synthesis and analysis. This involves synthesizing the findings from primary studies textually. This method was adopted because it has proved useful in synthesizing evidence of different types (qualitative, quantitative, mixed-method, etc.). This approach enabled the review to synthesize findings from both qualitative and quantitative studies to provide a comprehensive synthesis of the research literature in this field.
A descriptive overview of the included research, presented in tabular form, served as the basis for the synthesis. The key findings of each of the included studies were tabulated by the reviewers to answer the research question. This systematic review came to a qualitative conclusion by analyzing, contrasting, and summarizing the findings of the different investigations. All data types were given equal weight since the results of all the empirical studies were incorporated into a narrative form for this systematic review. Following data extraction, the studies were analyzed and classified according to competency-based medical education methods of assessment. We undertook conceptual mapping to identify themes within which to synthesize and present the findings of the primary studies, and the results were presented in a structured manner by dividing the studies into various homogeneous categories based on their dominant focus. The evidence gathered through this systematic review was graded using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) method [16].

Description of the Included Studies
The 36 studies comprised a mix of randomized controlled trials (02), prospective cohort studies (01), quasi-randomized trials/quasi-experimental studies (02), randomized crossover studies (01), controlled crossover trials (01), focus group interviews (02), questionnaire studies including cross-sectional studies (15), qualitative studies (02), qualitative studies with phenomenological designs (01), mixed-method studies (03), observational studies (02), multi-level analyses (02), feasibility studies (01), and principal component analyses (01). Table 1 lists the included studies along with their characteristics. The average quality score for the included studies was 62.53% (range: 35.71-83.33%). Five studies had a "low risk of bias" with an overall QATSDD percentage score greater than or equal to 75%. Twenty-eight studies had a "moderate risk of bias" with an overall QATSDD percentage score between 50 and 74%, whereas three studies had a "high risk of bias" with an overall QATSDD score between 0 and 49% (Table 1).
The included studies were grouped into the following seven categories based on their dominant focus (Table 2).
Computer or online-based formative test with automated feedback
(i) A favourable correlation between performance in the online computer-based formative assessment with automated feedback and performance in the summative assessment. (ii) Online quizzes that provide relevant feedback, known as formative self-assessments, can assist students in evaluating their knowledge and pinpointing areas of weakness.

Video feedback
(i) Video feedback was more useful than the analytical checklist score. (ii) Real-world video feedback appears to be significantly linked to an increase in one's perception of one's own empathy.

e-learning platforms for formative assessments
(i) Potential for the Kahoot! platform to help with formative evaluation in medical education. (ii) Real-time data collection for formative assessments is possible with virtual learning environments (VLEs). (iii) The usage of tablet devices in OSCE exams is linked to better examiner comments. (iv) The primary benefits of audience response systems are greater student motivation and the creation of an engaging learning environment rather than an increase in assessment scores.

Workplace-based Assessment (WBA)/mini-clinical evaluation exercise (mini-CEX)/Direct Observation of Procedural Skills (DOPS)
(i) The degree of trainees' ambiguity over WBA's goal greatly influenced how they used it. Many trainees thought of WBA as a way to evaluate their learning, and some were unsure of its purpose. (ii) Participants, in particular instructors, were not completely aware of the mini-CEX's formative aim. (iii) Low acceptance of the mini-CEX by some learners may be caused by the high-pressure hospital setting with overlapping clinical responsibilities, the challenge of organizing the assessment, and the variable availability of time as well as the motivation of consultants to deliver quality feedback.

Transitioning From a Behavioristic Assessment Method to a Constructive One
A study by Harrison CJ (2016) demonstrated the advantages of abandoning behavioristic assessment methods that rely on incentives and punishments. It highlighted the possible advantages of using three constructivist assessment tenets: step-by-step descaffolding to facilitate change towards a learning orientation, authenticity, and giving students a more active role [17].

Formative Assessment and Feedback
Kishan Prasad HL et al. (2020) concluded that by using the Objective Structured Practical Examination (OSPE) as a formative assessment tool, teaching-learning methodologies may be modified [18]. A study by Lim YS concluded that most medical students value the use of formative evaluations in teaching them the skills of self-directed learning (SDL) [19].
Wolcott MD (2018) found that, when included in a curriculum for health professionals, the multiple mini interview (MMI) can be a useful tool for the thorough evaluation of professional abilities. According to this analysis, the MMI is a valid method of assessment that may be successfully integrated into the health professions curriculum to evaluate specific professional abilities [20].
A study by Hossain S et al. (2014) demonstrated that summative assessment is significantly impacted by formative assessment in several ways. The input that instructors and students receive via formative assessments is crucial to the teaching-learning process. Additionally, formative evaluations inspire students to study regularly and pursue in-depth knowledge. However, too many formative assessments interfere with students' ability to learn independently, which has a detrimental impact on summative assessment [21].
A study by McKenzie S. (2017) showed that checklists, together with timely feedback, help learners recognize their mistakes [22]. Gonzalo JD (2014) concluded that it is important to support clinical educators in integrating tools for reflection and feedback into their bedside instruction [23]. A study by Choi S. (2020) supported the advantages of process-oriented feedback over outcome-oriented feedback. Process-oriented feedback aims to teach students new methods for achieving a standard instead of just telling them what is right or wrong [24].
A study by Kim EJ (2019) demonstrated that, compared to negative feedback, positive feedback resulted in higher levels of self-efficacy and stronger positive feelings. Performance reviews should combine positive reinforcement with a moderate amount of change-promoting feedback, as this combination improves both academic and emotional results [25]. Uhm S et al. (2018) concluded that medical students can learn and reflect on important information by utilizing feedback-based communication skills assessments, which may result in better communication skills [26]. Pelgrim EAM et al. (2012) showed that feedback's essence, delivery method, and integration into trainees' education are crucial [27].

Hurdles in the Implementation of Feedback
A study by Bok HJG et al. (2013) demonstrated that it was difficult to incorporate concurrent formative feedback and input for summative choices. Supervisors said that time restrictions prevented high-quality feedback from being provided, while students were reluctant to request assessments with feedback because they thought all workplace-based assessments (WBAs) were summative. The narrative formative input was deemed unhelpful by the trainees. The biggest obstacles in the upcoming years will be developing a clinical setting that is inherently supportive of feedback, such as by streamlining paperwork (e.g., by developing mobile-friendly evaluation tools) and providing supervisors and students with feedback training [28].
A study by Bates J. (2013) emphasized the need to move past the idea that assessment and feedback are just a collection of procedures and abilities and to realize that, in order to be effective, these processes and abilities must be integrated into supportive interactions and learning environments. When assessment and feedback are longitudinal and integrated into routine patient care, they become productive. This way of thinking is especially crucial for students to embrace constructive criticism and cultivate introspective practices [29].
Harrison CJ et al. (2013) found that better achievers seemed to use feedback more for positive reinforcement than for diagnostic data. Rather than trying to alter their conduct, trainees and students were looking for comments to boost their confidence. After an exam, feedback must be designed in a way that most effectively involves the students who require the greatest assistance [30]. Feller K and Berendonk C (2020) found that feedback from supervising physicians and allied healthcare professionals (AHPs) has a compounding impact, provides insight into performance from many angles, and helps paint a broader picture. Within the framework of workplace-based evaluation, interprofessional feedback seems to serve as a means of mutual learning [31].

Computer or Online-Based Formative Test With Automated Feedback
A study by Mitra NK et al. (2015) found that in a multidisciplinary integrated module of the third-year MBBS program, there was a favorable correlation between performance in the summative assessment and the online computer-based formative assessment with automated feedback. It was determined that any rise in the usage of computer-based formative assessments with automated feedback will result in a marginal improvement in the student's summative assessment score, since the learning process will be improved [32].
Bijol V (2015) demonstrated that online quizzes that provide relevant feedback, known as formative self-assessments, can assist students in evaluating their knowledge and pinpointing areas of weakness. This enables timely interventions that effectively support student learning. It was discovered that students who chose to take quizzes performed well on the final test in every category [33]. Palmer E and Devitt P. (2014) showed that the internet is a useful and well-acknowledged tool that may encourage student-centered learning and offer prompt formative feedback. Nevertheless, creating high-quality content takes time [34]. A study by Kühbeck F (2019) demonstrated that online tests providing formative feedback enable students to more accurately gauge their academic achievement and knowledge base [35]. A study by Ode GE (2019) showed that it is possible to implement an instant feedback program using an electronic platform, and that it provides replicable construct validity [36].

Video Feedback
Karn BS (2019) demonstrated, in a clinical performance assessment (CPA) in medical education, that video feedback was more useful than the analytical checklist score. Performance competence scores were greater in the experimental group that received video feedback [37]. Dohms MC (2020) showed that real-world video feedback (VF) appears to be significantly linked to an increase in one's perception of one's own empathy [38].

E-Learning Platforms for Formative Assessments
A study by Ismail MAA (2019) showed that there is potential for the Kahoot! platform to help with formative evaluation in medical education. Kahoot!, an interactive platform for game-based learning, is a popular free formative assessment tool in education. With Kahoot!, educators may design four distinct game-based learning experiences: surveys, jumbles, quizzes, and debates in which participants compete with one another. Therefore, Kahoot! should be integrated into the educational endeavors of the health professions, especially for formative evaluation [39]. A study by Jafri L et al. (2020) found that it is prudent to promote the implementation of virtual learning environments (VLEs) in the context of the WBA program. Real-time data collection for formative assessments is now feasible because of the development of VLEs and related software systems [40]. Denison A. et al. (2016) found that, compared to the conventional paper-based data-gathering method, the usage of tablet devices in OSCE exams is linked to better examiner comments for use as feedback [41].
The audience response system did not have a beneficial long-term effect on evaluation findings, according to research by Schmidt T. et al. (2020). The primary benefits of audience response systems are greater student motivation and the creation of an engaging learning environment rather than an increase in assessment scores [42].

Workplace-Based Assessment (WBA)/Mini-Clinical Evaluation Exercise (Mini-CEX)/Direct Observation of Procedural Skills (DOPS)
The study by Nair BR et al. (2015) showed that both assessors and learners said the WBA process was beneficial and offered excellent chances for performance improvement [43]. Lefroy J. (2015) directly compared WBA feedback in the undergraduate medical program with and without the use of marks. 78% of middle-stage medical students expressed a desire for marks, and it was shown that marks can be beneficial when they are connected to formative evaluation [44]. Gaunt A. (2017) concluded that the degree of trainees' ambiguity over WBA's goal greatly influenced how they used it. Many trainees thought of WBA as a way to evaluate their learning, and some were unsure of its purpose [45]. A study by Liang Y and Noble LM (2020) indicated that participants, in particular instructors, were not completely aware of the mini-CEX's formative aim [46]. The findings of the study by Berendonk C (2018) put a question mark on the validity of mini-CEX domain scores for formative purposes [47]. Rogausch et al. (2015) found that narrative feedback is more appropriate and seems to have more informative value than quantitative mini-CEX ratings [48]. In contrast, a study by Suhoyo Y et al. showed that both experts and students expressed strong agreement regarding the mini-CEX's usefulness and its effect on professional growth and learning [49]. Liao KC (2013) concluded that faculty development is necessary to train assessors before a successful mini-CEX evaluation program can be carried out [50]. The findings of the 2013 study by Tokode OM et al. about the educational aspect of the mini-CEX show that participants reported learning new clinical skills, correcting incorrect clinical competencies, and increasing their knowledge via the mini-CEX assessment. The low acceptance of the mini-CEX by some learners may have been caused by the high-pressure hospital setting with overlapping clinical responsibilities, the challenge of organizing the assessment, and the variable availability of time as well as the motivation of consultants to deliver quality feedback [51]. Bansal M et al. (2019) concluded that due to its ability to assess candidates at the "does" level, DOPS is an effective tool [52].

Application of Different Constructivist Tools
Various constructivist instruments can be utilized to evaluate students' learning, performance, and advancement [53]. Some of these tools are concept maps, portfolios, and rubrics.

Concept Maps
Concept mapping allows learners to organize important ideas spatially rather than in a sequential or semantically ordered manner. It can also include iconography and visual aids. Concept maps are a useful tool for formative assessment because they allow students to create various visual representations of what they have learned [54].

Portfolios
A student's portfolio is a deliberate compilation of their own work. It is an ongoing log of students' writing abilities over time. Additionally, it serves as a living record through which students can see what they have or have not accomplished [55].

Rubrics
Rubrics are evaluation scales that are especially useful for assessments of tasks or performance. A rubric is a document that lists the requirements for an assignment and clarifies what is expected; it is a collection of scoring instructions or descriptive scoring techniques used to grade student work. Rubrics also let students reflect on and assess themselves [56]. The purpose of the rubric should be to serve as a tool for formative evaluation to encourage critical thinking and enhance constructivist teaching strategies. Rubrics are a useful tool for formative assessment because they promote clear learning intentions and give instructors and students a foundation for goal setting, feedback, and peer assistance [57].

Increasing Student Involvement in the Design and Implementation of the Formative Assessment
Learners learn most effectively when they are actively involved in the process, driven to assess their knowledge against predetermined benchmarks, and provided with timely, focused help to meet their learning requirements. In general, this point of view supports a process of assessment that centers on the active participation of students and uses formative test results to set particular knowledge goals for both students and instructors [58].

Self-Regulatory Learning
Students taking ownership of their own learning and engaging in fruitful formative assessments may both benefit from self-regulatory learning (SRL). Through ongoing, deliberate interactions between teachers and students that were performance- and learning-directed, formative assessment helped students develop their capacity for self-regulation. Theorists concur that SRL is associated with improved academic achievements and motivation when learners develop the adaptive, self-directed learning traits necessary for greater involvement with the learning process and, subsequently, good performance [59].

Peer Review or Peer Feedback
Peer evaluation, or peer review, is one way for students to provide comments on each other's work. Peer review plays a significant role in fostering a culture of more active learning in the classroom. In both academic and professional contexts, peer evaluation may be employed as a tactic to increase students' interest in their own learning. The cooperative nature of peer evaluation relates to the larger goals of lifelong learning and professional partnership. Peer assessments in either large or small groups can serve as the basis for written or verbal peer feedback [60]. Despite the benefits and theoretical support for peer assessment, conducting an efficient peer assessment is viewed as more complex than simply introducing a suitable assessment tool. Participants and facilitators alike acknowledge their reluctance to engage with these tools [61].

Online Formative Assessment
Online formative evaluation has been found to foster student engagement and learner growth. Online formative assessment also allows students to evaluate themselves and receive feedback that supports and modifies their self-regulated learning [62]. E-assessments aided student involvement and passion for learning, and the outcomes showed how important they were to the teaching and learning process [63]. Additionally, formative evaluation using computer-administered multiple-choice questions has been found to positively impact student activation. Online formative assessment can take many different forms (e.g., practice tests, one-minute papers, clearest or muddiest point tasks, various in-class group projects, etc.) [64].

Online Computer-Based Formative Test with Automated Feedback
Computer-based assessment systems provide the benefit of objective marking according to predetermined scoring criteria, assessing results consistently and without regard to the subjects or assessment scenarios [65]. The benefits of computer-based assessment (CBA) in formative assessment stem mostly from the speed and timing of computer-generated (detailed) feedback, as well as the test's flexibility in question selection. The ability to give students timely feedback while they are completing the exam may lead to higher learning outcomes. Electronic feedback in online courses has been found to improve student learning [66].

Use of Computerized Web-Based Objective Structured Clinical Examination (OSCE) Evaluation System
Student performances can be captured as digital recordings, videos, and computer files. Electronic software was found to make analysis of the whole data set easier, saving a significant amount of time. Similarly, the use of an electronic system greatly reduced the time required for results analysis, freeing up more time for data comprehension, improved curriculum development, and advancements in clinical teaching. Online OSCE deployment demonstrated acceptability, affordability, and viability [67]. This technique makes it easier for both the assessor and the examinee to provide feedback, making it a potentially helpful tool for skill evaluation and instruction [68].
Use of Narrative Feedback Rather Than Numerical Scores in Mini-CEX
Some recent research and recommendations concluded that numerical mini-CEX scores might be removed from the forms because they do not seem to provide much information. When it comes to informing learners about changes in practice, narrative feedback works better than 'checkboxes'. It is important to look for ways either to increase the informative usefulness of WBA ratings or to avoid using them altogether in favor of narrative comments alone. Directly observing student-patient interactions is valuable because it provides rich narrative feedback that sparks important conversations among learners and trainers.
A model framework of assessment for competency-based medical education that will be relevant in the Indian context is presented in Table 3.

Formative feedback by online assessments/e-learning platforms
(i) Digital media should be used for feedback delivery, as they can lead to a more participatory feedback process; enhanced comprehension and higher-order thinking abilities; more genuine, supportive, and personal contact; and more detailed, tailored feedback.
(ii) The use of video feedback may foster an environment of participatory assessment, build meaningful relationships with students, and encourage a mindset of development and attention to detail.

Virtual learning environment (VLE) for WBA
(i) Curriculum communications, assessment and progress data, and lesson plans may all be provided through the VLE (on smartphones, tablets, computers, etc.). (ii) By providing immediate feedback, a VLE may generally improve formative assessment.

Peer Feedback
(i) Peer feedback enhances self-reflection and offers prompt remarks on students' work. (ii) Students also get the chance to think about what they have done and how it could be improved when they review one another's work. (iii) Peer review can be difficult, but when teachers set clear expectations, students can participate in constructive peer review.

Peer-Assisted Learning (PAL)
(i) An effective PAL technique may support both peer and self-assessment.

Embedded assessments
(i) By incorporating targeted activities that let students demonstrate their present knowledge and abilities without needing to pause for a formal exam, assessments can be integrated into lessons.

Virtual Learning Environment (VLE) for WBA
A virtual learning environment (VLE) is an online platform for accessing instructional material.
Computers or smartphones can be used for this. Curriculum communications, assessment and progress data, and lesson plans may all be provided through the VLE. This makes them easily accessible to trainers and/or students, who may then utilize them to customize the learning process. Virtual learning environments can provide real-time verbal or video interaction, enabling instruction in real time. This could also involve some on-screen engagement, such as a virtual whiteboard or screen sharing.
Students will find significant value in the immediate feedback that a VLE provides following the observation stage, which they can use to assume more responsibility for their own learning. By providing immediate feedback, a VLE may generally improve formative assessment [70].

Peer Feedback
Peer feedback enhances self-reflection and offers prompt remarks on students' work. Feedback can often be completed more quickly when students evaluate one another's work than when the instructor reviews each student's work separately. Students also get the chance to think about what they have done, how it could be improved, and whether it fulfills assignment requirements when they review one another's work. Peer review can be difficult, since friendships and culture might influence it. However, when teachers set clear expectations and ask engaging questions, students can participate in constructive and beneficial peer review [71].

Peer-Assisted Learning (PAL)
Peer-assisted learning (PAL) activities involve individuals from comparable social groups, who are not certified instructors, assisting one another in the learning process. PAL is widely acknowledged and used as an instructional strategy in health professional courses. It involves a socialization process and frequently involves junior and senior students serving as mentees and mentors, respectively. PAL exercises offer a structure that allows students to hone and improve their teaching and learning abilities. Students learn from and alongside one another through shared resources and the contributions of their diverse experiences. An effective PAL technique may support both peer and self-evaluation [72].

Embedded Assessments
Embedded assessments are assignments, exercises, or activities that are completed during training but are used to gather assessment information on a specific learning objective. By incorporating targeted activities that let students demonstrate their present knowledge and abilities without needing to pause for a formal exam, teachers integrate evaluation into their lessons. The student's work can be assessed by the instructor and/or assessors; a rubric is commonly used in this process [73].

Limitations of the study
With respect to the methodology used, there is a chance that not every accessible study was identified in this systematic review. Despite the use of a strict methodology, it is possible that some research was overlooked. Additionally, research published in other languages may have been missed, as only English-language publications were considered. The same applies to research in conference proceedings, book chapters, and gray literature.
We planned to use The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach to evaluate the quality of the evidence gathered through this systematic review; however, we were unable to assess the quality of the evidence for the recommendations.The included studies' heterogeneity made it impossible to apply GRADE to assess the research's quality.
Using the GRADE approach to score the included research proved difficult, especially in cases where assessments were impacted by various circumstances in the setting of undergraduate competency-based medical education.A conscious choice was made to label these recommendations as "not rated" in response to this.
Please note that an a priori protocol was prepared before undertaking the systematic review, and it was published in the Journal of Education and Health Promotion [74].

Conclusions
It has been noted that formative assessments significantly affect summative assessments in several ways. However, too many formative assessments interfere with students' ability to learn independently, which has a detrimental effect on summative assessment. Compared to negative feedback, positive feedback results in higher levels of self-efficacy and stronger positive feelings. Assessment and feedback become productive when they are longitudinal and integrated into routine patient care. It has been found that trainees and students sought feedback to boost their confidence rather than to alter their conduct. Feedback from allied health care professionals (AHPs) has a compounding impact, provides insight into performance from many angles, and helps paint a complete picture. Any rise in the use of computer-based formative assessments with automated feedback should improve students' summative assessment scores, since the learning process is improved. An instant feedback program can be implemented on an electronic platform and provides replicable construct validity. It is prudent to promote the implementation of virtual learning environments (VLEs) in the context of the WBA program. Real-time data collection for formative assessments is now feasible because of the development of VLEs and related software systems. The WBA approach, like all other formative assessment techniques, functions best when it is integrated into the workplace, offers targeted feedback, and is implemented in a timely manner.
The participants, in particular instructors, were not completely aware of the mini-CEX's formative aim, and the validity of mini-CEX domain scores for formative purposes has been questioned. This systematic literature review highlighted that formative feedback via online assessments and e-learning platforms, virtual learning environments (VLEs) for WBA, peer feedback, and peer-assisted learning (PAL) are all necessary components of a model framework of assessment for competency-based medical education that will be applicable in the Indian setting. Therefore, this study proposes the following recommendations. Formative feedback by online assessments and video feedback: digital media should be used for feedback delivery, and video feedback may facilitate an environment of participatory assessment. Virtual learning environment (VLE) for WBA: curriculum communications, assessment and progress data, and lesson plans may all be provided through the VLE, which generally improves formative assessment. Peer feedback and peer-assisted learning (PAL): through peer feedback, students get the chance to reflect on what they have done and how it could be improved, and an effective PAL technique may support both peer and self-assessment. Embedded assessments: embedded assessment tasks within the framework of instruction may be an important tool for improving learning and teaching; we advise developing a toolbox for embedded assessment.

2024 Gupta et al. Cureus 16(4): e58073. DOI 10.7759/cureus.58073

TABLE 1: The included studies along with their characteristics

The papers included in this systematic review were published in 10 different journals. The greatest number of articles were published in BMC Medical Education (18), followed by Medical Education (05), Medical Education Online (03), Medical Science Educator (02), Perspectives on Medical Education (02), Advances in Medical Education and Practice (02), Indian J Otolaryngol Head Neck Surg (01), Journal of Surgical Education (01), Bangladesh Journal of Medical Education (01), and International Journal of Medical Education (01).
Moving away from a behavioristic approach to a constructivist approach of assessment: (i) Need to abandon behavioristic assessment methods that rely on incentives and punishments. (ii) Recommendation of using three constructivist assessment tenets: step-by-step descaffolding to facilitate a change towards a learning orientation, authenticity, and giving students a more active role.
Formative assessment and feedback: (i) Using the Objective Structured Practical Examination (OSPE) as a formative assessment tool. (ii) Multiple mini-interviews (MMI) can be a useful tool. (iii) Too many formative assessments interfere with students' ability to learn independently. (iv) Checklists, together with timely feedback, help learners recognize their mistakes. (v) Process-oriented feedback is advantageous over outcome-oriented feedback. (vi) Compared to negative feedback, positive feedback results in higher levels of self-efficacy and stronger positive feelings. (vii) Feedback-based communication skills assessments may result in better communication skills.
Hurdles in the implementation of feedback: (i) Time restrictions prevented supervisors from providing high-quality feedback. (ii) Students were reluctant to request assessments with feedback because they thought all workplace-based assessments (WBAs) were summative. (iii) Narrative formative input was deemed unhelpful by the trainees. (iv) Developing a clinical setting that is inherently supportive of feedback. (v) Providing supervisors and students with feedback training.
Computer or online-based formative test with automated feedback

TABLE 3: A model framework of assessment for competency-based medical education that will be relevant in the Indian context
Assessment feedback may change as a result of digital input, especially in online learning settings [69]. Teachers can record video or audio footage with transcripts so that students can both read the transcribed comments and hear the instructor's tone, in addition to providing thorough textual material entered directly into a student's digital document. A more participatory feedback process; enhanced comprehension and higher-order thinking abilities; more genuine, supportive, and personal contact; and more detailed, tailored feedback are all advantages of receiving feedback through a digital medium. By using video feedback, you may foster an environment of participatory assessment, build a meaningful relationship with your students, and encourage a mindset of development and attention to detail [69].