

 
 
ARTICLE
Year : 2017  |  Volume : 18  |  Issue : 2  |  Page : 12-18

Curriculum evaluation: Using the context, input, process and product (CIPP) model for decision making


Nursing Tutor, Himalayan College of Nursing, Dehradun, India

Date of Web Publication: 9-Jun-2020


Source of Support: None, Conflict of Interest: None


  Abstract 


Evaluation is a systematic investigation of the value of a programme. More specifically, an evaluation is a process of delineating, obtaining, reporting, and applying descriptive and judgmental information about an object's merit, worth, probity, and significance. A sound evaluation model provides a link to evaluation theory, a structure for planning evaluations, a framework for collaboration, a common evaluation language, a procedural guide, and standards for judging evaluations. One evaluation model that is used widely to evaluate a curriculum or programme is the Context, Input, Process, and Product (CIPP) model. This model attends to the results actually obtained rather than assuming that the results will match the preset goals or expected outcomes of a curriculum. This article describes the CIPP model and explains its application in a research project.

Keywords: evaluation, CIPP model, framework


How to cite this article:
Vishnupriyan M. Curriculum evaluation: Using the context, input, process and product (CIPP) model for decision making. Indian J Cont Nsg Edn 2017;18:12-8

How to cite this URL:
Vishnupriyan M. Curriculum evaluation: Using the context, input, process and product (CIPP) model for decision making. Indian J Cont Nsg Edn [serial online] 2017 [cited 2021 Apr 19];18:12-8. Available from: https://www.ijcne.org/text.asp?2017/18/2/12/286264






  Introduction


Evaluation is a vital step in any activity. It is a process of delineating, obtaining, and providing useful information for judging decision alternatives, and a method of ascertaining the relative value of competing alternatives (Stufflebeam, 1983). Evaluation is an integral part of curriculum planning and development. Curriculum evaluation can be defined as the assessment of the merit and worth of a program of studies, a field of study, or a course of study (Glatthorn, Boschee, & Whitehead, 2009). It is a process that examines whether the course of study and the planned teaching and learning activities produce the desired end result, and that decides which aspects of the curriculum need improvement. Evaluation need not be executed only at the end of the development process; it entails a sequence of activities carried out continuously, alongside planning, from beginning to end. Evaluation essentially is collecting information against established criteria to make informed decisions related to the curriculum of interest (Keating, 2014).

The approaches to evaluation of a programme or curriculum may vary based on the focus of evaluation. In general, five approaches have been delineated by Stecher and Davis (1987). Evaluators may take an experimental, goal-oriented, or decision-focused approach, basing the evaluation on outcomes, on objectives, or on the information needed for making decisions. The user-focused evaluator and the responsive evaluator discern the needs of the users and of the people who have a stake in the programme or curriculum. Evaluators can choose the approach that best suits their need (see [Table 1]).
Table 1: Five Approaches to Evaluation



In addition to the approaches mentioned in [Table 1], evaluators may decide on a model on which the approaches can be framed. Many evaluation models have been suggested for the task of identifying the merit and worth of a curriculum (Glatthorn et al., 2009). Bradley's (1985) Effectiveness Model looks at how effectively the curriculum meets ten pre-established indicators. Tyler's (1969) Objective-centered Model, one of the oldest models and still used extensively, measures the worth and merit of learning against the established objectives. Scriven's (1966) Goal-free Model questions the importance of goals and objectives in an educational programme and redirects evaluators' attention to the unintended effects a curriculum can produce. Stake's (1967) Responsive Model is based on the assumption that the concerns of the stakeholders for whom the evaluation is done should be a priority in determining the evaluation issues. The Context, Input, Process, and Product (CIPP) model, established by the Phi Delta Kappa committee chaired by Stufflebeam (1971), created great interest among educationalists because of the emphasis it gave to evaluative data that help in making decisions about a curriculum. All these models provide evaluators with a framework for planning and carrying out the evaluation process; any one model, or a combination of concepts from different models, may be used depending on the need.


  CIPP - Decision Focused Model


One very useful model for educational evaluation is the CIPP approach, developed by Stufflebeam (1983). The model provides a systematic way of looking at many different aspects of the evaluation process. The concept of evaluation underlying the CIPP model is that “evaluations should assess and report an entity’s merit (its quality), worth (in meeting needs of targeted beneficiaries), probity (its integrity, honesty, and freedom from graft, fraud, and abuse), and significance (its importance beyond the entity’s setting or time frame), and should also present lessons learned” (Stufflebeam, 2007).


  CIPP - Core Concepts


CIPP, a decision-focused approach, is designed to evaluate and emphasize the systematic provision of information for programme management. In this approach, information is seen as most valuable when it helps managers make better decisions, and therefore evaluation activities are planned to coordinate with the decision needs of workers (Datta, 2007). The CIPP model is based on a cycle of planning, structuring, implementing, reviewing, and revising decisions as the need arises. Each part of the cycle is examined through a different aspect of evaluation: Context, Input, Process, and Product (Stufflebeam, 2003a). Context evaluations assess needs, problems, assets, and opportunities. Input evaluations assess alternative strategies, action plans, staffing, and budget needs. Process evaluations assess how plans are implemented, and product evaluations identify and assess the outcomes of the programme (see [Table 2]).
Table 2: Core Concepts of CIPP

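The four core concepts and the decision types they inform (planning, structuring, implementing, and recycling decisions, as elaborated in the application example later in this article) can be sketched as a simple lookup table. The Python structure below is purely illustrative; the names are ours, not part of any standard CIPP tooling:

```python
# Illustrative mapping of each CIPP component to what it assesses and the
# decision type it informs. The dictionary layout is a sketch for this
# article, not an official CIPP artifact.
CIPP = {
    "context": {"assesses": "needs, problems, assets, and opportunities",
                "informs": "planning decisions"},
    "input":   {"assesses": "alternative strategies, action plans, staffing, and budget needs",
                "informs": "structuring decisions"},
    "process": {"assesses": "how plans are implemented",
                "informs": "implementing decisions"},
    "product": {"assesses": "outcomes of the programme",
                "informs": "recycling (reviewing and revising) decisions"},
}

def decision_for(component: str) -> str:
    """Return the decision type a given CIPP component informs."""
    return CIPP[component.lower()]["informs"]
```

Laying the concepts out this way makes the one-to-one pairing between evaluation components and decision types explicit.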



  The CIPP - Evaluation Modes


The CIPP evaluation model can be used in both formative and summative modes (Stufflebeam, 2003b). In the formative mode, based on the four core concepts, the model guides an evaluator to ask: (a) What needs to be done? (b) How should it be done? (c) Is it being done? (d) Is it succeeding? In the summative mode, the evaluator uses the information already collected to address the corresponding retrospective questions: (a) Were important needs addressed? (b) Was the effort guided by a defensible plan and budget? (c) Was the service design executed competently and modified as needed? (d) Did the effort succeed? Overall, the purpose of CIPP evaluation is not to prove but to improve (Stufflebeam, 2003b), which makes it a useful tool in curriculum or project evaluation.
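The four formative questions and their summative counterparts pair off component by component, and that pairing can be laid out as a simple checklist structure. The sketch below is an illustration for this article, not an official CIPP instrument:

```python
# Illustrative checklist pairing each CIPP component with its formative
# (prospective) and summative (retrospective) guiding question, per
# Stufflebeam (2003b). The structure itself is our own sketch.
CIPP_QUESTIONS = {
    "context": {"formative": "What needs to be done?",
                "summative": "Were important needs addressed?"},
    "input":   {"formative": "How should it be done?",
                "summative": "Was the effort guided by a defensible plan and budget?"},
    "process": {"formative": "Is it being done?",
                "summative": "Was the service design executed competently and modified as needed?"},
    "product": {"formative": "Is it succeeding?",
                "summative": "Did the effort succeed?"},
}

def questions(mode: str) -> list[str]:
    """List the four guiding questions for a mode ('formative' or 'summative')."""
    return [CIPP_QUESTIONS[c][mode] for c in ("context", "input", "process", "product")]
```

An evaluator planning a study could walk `questions("formative")` before implementation and `questions("summative")` afterwards, keeping the two modes aligned component by component.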


  CIPP - Collecting Data for Evaluating Effectiveness


To answer all questions pertaining to CIPP, the evaluator needs to use rigorous and ethical data collection procedures (Stufflebeam, 2003b). Although the CIPP model adopts an objectivist stance, Stufflebeam encourages evaluators to use multiple methods, including both qualitative and quantitative designs, to get a wider perspective on each component. The model also encourages the use of 'home-made' instruments, which can be context specific.

The CIPP model is thus widely used in programme and curriculum evaluation. It has been used, for example, to evaluate the effectiveness of introducing academic advisors for student guidance and to evaluate programmes at master's and undergraduate levels (Allahvirdiyani, 2011). The model has been developed from strong evidence in the literature, describes understandable steps in evaluation, and encourages evaluators to use multiple methods to collect and analyze the pertinent data required for evaluation.


  Application of CIPP Evaluation Model


The following example illustrates the application of the CIPP model in evaluating the effectiveness of a learning programme implemented for children with dyslexia. Dyslexia is a learning disability in which the child has difficulties in reading, writing, and spelling. These children generally have above-average intelligence but have problems with language development, memory, and sequencing (Dyslexia Association of India, 2016). The worldwide prevalence of dyslexia is about 5-20%, and about 15% of children in India struggle with dyslexia (Vardhan, 2015).

Understanding the learning style and ability, and applying teaching methods that involve all senses (auditory, visual, kinesthetic, and tactile) is important in helping children with dyslexia to develop language skills (Gilakjani & Ahmadi, 2011).

An experimental study on the effectiveness of a Visual Auditory Kinesthetic Tactile (VAKT) technique on reading level among dyslexic children was carried out in a major South Indian city. The main aim of the study was to bring about a positive change in reading level by examining the effectiveness of a researcher-prepared VAKT technique among dyslexic children (Jeyasekaran, 2015). The CIPP evaluation model was used as the framework for the study, and the application of the model's components is explicated here as an exemplar (see [Figure 1]).
Figure 1: CIPP model applied to study on VAKT teaching technique and reading levels of Dyslexic children



Context

Context evaluation relates to planning decisions. The justification and rationale for the study were evaluated on the basis of the investigator's interest in the population and the evidence from the literature (Jeyasekaran, 2015). Decisions were made on what intervention would be appropriate and what needed to be done to bring about a positive change in the reading level of children with dyslexia. As part of the context evaluation, the investigator had to understand the existing reading capabilities of dyslexic children and what interventions could be designed. The information from this evaluation assisted the investigator in planning the steps of the whole study.



Input

Input evaluation is carried out for structuring decisions. Evaluating the inputs involved exploring and understanding the dyslexic children's socio-demographic characteristics, which helped in selecting the subjects who would benefit from the intervention. Developing and validating a context-specific VAKT intervention was another important part of the input assessment. The investigator further had to evaluate and decide on the data collection procedures, considering the age and comprehension ability of the children, and to select a setting where the intervention could be applied with the least possible hindrance. The input assessment enabled the investigator to structure the methods of the study.

Process

Process evaluation helps in implementing decisions. It drew on the pilot study results to finalize the design and the intervention, which enabled conducting a pretest to assess the pre-intervention reading level. The structured process was then implemented as planned for the selected subjects with dyslexia: VAKT was delivered for 30 days in a step-wise fashion based on the needs and difficulties of the selected subjects.



Product

Product evaluation relates to decisions regarding reviewing and recycling. The final outcome of the programme showed the effectiveness of the intervention, namely an improvement in the reading level of the dyslexic children. The outcome evaluation was done by conducting a posttest and examining the difference between the pre- and post-test assessments. The results revealed a 12% increase in the reading level of the children who took part in the study. This result enabled the investigator to implement the VAKT technique as a regular educational method for children with dyslexia in specific settings.
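The reported gain can be read as a relative increase of the post-test result over the pre-test result. The short sketch below illustrates that arithmetic with hypothetical scores; the study's raw scores are not reproduced in this article, so the numbers here are invented for illustration only:

```python
# Hypothetical pre/post mean reading scores, used only to show how a
# relative improvement such as the reported 12% could be computed.
# These numbers are NOT from Jeyasekaran (2015).
def relative_increase(pre_mean: float, post_mean: float) -> float:
    """Percentage increase of the post-test mean over the pre-test mean."""
    return (post_mean - pre_mean) / pre_mean * 100

print(relative_increase(50.0, 56.0))  # 56 is 12% above 50
```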

The example in [Figure 1] depicts how the core concepts of the CIPP model influenced the evaluation and decisions in the study. Like any framework, the CIPP model has both strengths and weaknesses (Stufflebeam, Madaus, & Kellaghan, 2000).


  Conclusion


Evaluators should employ a sound concept of evaluation, and the CIPP model provides one tested, comprehensive framework for evaluators and clients. The CIPP model is also adaptable for long-term evaluation, involving both ongoing review and a final stage of reviewing and revising the programme. The model has been widely applied and developed: evaluators from various fields who have applied it include government officials, foundation officers, program and project staff, international assistance personnel, school administrators, programme managers, physicians, and military leaders. The model is also configured for use in internal evaluations conducted by an organization's own evaluators. The CIPP model thus proves to be a good model for evaluation and can be applied effectively to curriculum, course, or programme evaluation in any setting.

Conflicts of Interest: The author has declared no conflicts of interest.





 
  References

1. Allahvirdiyani, K. (2011). Evaluate implemented academic advisor of Shahed students in Tehran State Universities through CIPP evaluation model. Procedia - Social and Behavioral Sciences, 15, 2996-2998.
2. Bradley, L. H. (1985). Curriculum leadership and development handbook. Englewood Cliffs, NJ: Prentice Hall.
3. Datta, L. E. (2007). Evaluation theory, models, and applications, by Daniel L. Stufflebeam and Anthony J. Shinkfield. San Francisco: Jossey-Bass.
4. Dyslexia Association of India (DAI). (2016). Dyslexia. Retrieved from http://dyslexiaindia.org.in/what-dyslexia2.html
5. Gilakjani, A. P., & Ahmadi, S. M. (2011). Visual, auditory, kinaesthetic learning styles and their impacts on English language teaching. Journal of Studies in Education, 2(1), 104-113.
6. Glatthorn, A. A., Boschee, F., & Whitehead, B. M. (2009). Curriculum leadership: Strategies for development and implementation. Los Angeles: Sage.
7. Jeyasekaran, J. M. (2015). Effectiveness of visual auditory kinesthetic tactile technique on reading level among dyslexic children at Helikx Open School and Learning Centre, Salem. International Journal of Medical Science and Public Health, 4(3), 315-319.
8. Keating, S. B. (Ed.). (2014). Curriculum development and evaluation in nursing. New York: Springer Publishing Company.
9. Scriven, M. (1966). The methodology of evaluation. Indiana: Social Science Education Consortium.
10. Stake, R. (1967). The countenance of educational evaluation. Teachers College Record. Retrieved from http://www.tcrecord.org/library
11. Stecher, B., & Davis, W. A. (1987). How to focus an evaluation. California: Sage.
12. Stufflebeam, D. L. (1971). An EEPA interview with Daniel L. Stufflebeam. Educational Evaluation and Policy Analysis, 2(4), 85-90.
13. Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In Evaluation models. Dordrecht: Springer.
14. Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (Eds.). (2000). Evaluation models: Viewpoints on educational and human services evaluation. Boston: Springer Science & Business Media.
15. Stufflebeam, D. L. (2003a). The CIPP model for evaluation. The International Handbook of Educational Evaluation, 9, 31-62.
16. Stufflebeam, D. L. (2003b). The CIPP model for evaluation. Paper presented at the Annual Conference of the Oregon Program Evaluators Network, Portland, Oregon.
17. Stufflebeam, D. L. (2007). CIPP evaluation model checklist: A tool for applying the CIPP model to assess long-term enterprises. Retrieved from https://www.wmich.edu/sites/default/files/attachments/u350/2014/cippchecklist_mar07.pdf
18. Tyler, R. (1969). Basic principles of curriculum and instruction. Chicago: The University of Chicago Press.
19. Vardhan, H. (2015). Assessment tools for dyslexia. Retrieved from http://nbrc.ac.in/download/post_release_writeup.pdf

