A Process Evaluation of a Social Cognitive Theory–Based Childhood Obesity Prevention Intervention: The Comics for Health Program

Paul Branscum, PhD, RD (University of Oklahoma, Norman, OK, USA)
Manoj Sharma, PhD, MBBS (University of Cincinnati, Cincinnati, OH, USA)
Lihshing Leigh Wang, PhD (University of Cincinnati, Cincinnati, OH, USA)
Bradley Wilson, PhD (University of Cincinnati, Cincinnati, OH, USA)
Liliana Rojas-Guyler, PhD (University of Cincinnati, Cincinnati, OH, USA)

Health Promotion Practice, March 2013, Vol. 14, No. 2, 189-198
DOI: 10.1177/1524839912437790
© 2012 Society for Public Health Education

Authors' Note: The authors would like to thank Nancy Brody and all of the YMCA of Central Ohio staff for their support and willingness to work together for this project.
Process evaluations are an often overlooked yet essential
component of health promotion interventions. This study
reports the results of a comprehensive process evaluation
for the “Comics for Health” program, a childhood obesity
prevention intervention implemented at 12 after-school
programs. Qualitative and quantitative process data
were collected using surveys, field notes, and open-item
questionnaires, which assessed program fidelity, dose
delivered, dose received, reach, recruitment, and context.
Triangulation of methods was also employed to better
understand how the program was implemented and
received by the facilitator, staff members, and children in
the program. Results indicated that program implementation
had a nearly perfect rate of fidelity, with most lessons recording 100% of tasks completed. Lessons were
implemented in their intended order and lasted approximately
30 minutes as planned. After-school staff members
reported that the program was well received by the children and that it should be replicated in the future. Attendance records showed that a majority of the children (70.4%) attended each lesson on its initial day of delivery, and informal make-up lessons were provided for the remaining children. Finally,
several known sources of contamination were found,
such as past and concurrent exposure to similar health
promotion interventions, which could potentially influence
study outcomes. These findings will be used to help
explain the results of this intervention and make recommendations
for future intervention efforts.
Keywords: behavior change theory; child/adolescent
health; process evaluation
>>Introduction
Early onset of obesity among children and adolescents
is a major public health concern, and there is great
interest in developing innovative and effective health
promotion interventions that can favorably influence
behaviors associated with its prevention. For such
interventions, program outcomes, such as a decrease in
overall body mass index (BMI) percentile or an increase
in specific health-related behaviors such as physical
activity, are typically used as barometers for defining
success. It is important to note, however, that process evaluations are also critical but are not yet widely used or appreciated as such (Saunders, Evans, & Joshi, 2005).
To illustrate, among five recently published literature
reviews and meta-analyses evaluating 87 unique childhood
obesity interventions spanning from 1966 to 2008
(Cook-Cottone, Casey, & Feeley, 2009; Gonzalez-Suarez,
Worley, Grimmer-Somers, & Dones, 2009; Kanekar &
Sharma, 2008-2009; Katz, O’Connell, Njike, Yeh, &
Nawaz, 2008; Shaya, Flores, Gbarayor, & Wang, 2008),
although BMI percentile was largely reported on as a
means for defining study success or failure, no review
reported the number of studies that used process evaluations.
To shed some light on this factor, the authors of
this study reviewed the 87 studies and found that only
33 (or 40%) reported using at least one type of process
evaluation for their study protocol (Figure 1).
There are many types of process evaluations health
educators can use. When used together they can measure
various aspects of program implementation, such
as the who (who implemented and received the program),
what (what intervention components were
delivered), when (when were the intervention components
delivered), where (where did the intervention
take place), and how much (what was the length or
duration of the intervention) of the intervention in
question. In research evaluating childhood obesity prevention interventions, it is often the case that programs are implemented by different individuals across multiple locations, each of which could contribute bias or contamination to the overall outcomes of the study. By
failing to monitor this and other program activities,
researchers run the risk of making what is formally
known as a Type III error, where weak or null results
can be attributed to poorly executed or incorrectly
implemented interventions (Windsor, Clark, Boyd, &
Goodman, 2004). Therefore, it is critical to employ a
standardized and comprehensive set of process evaluation
methodologies to capture this variability, in order
to describe these areas and assess whether the contamination
appears to have an impact on study outcomes.
Furthermore, monitoring the implementation of
an intervention helps researchers by enhancing their
ability to interpret findings and outcome measures
reported in their studies. For example, when researchers
are faced with negative or null outcomes for an
intervention, process evaluations can help distinguish
between an ineffective intervention (one that does not
produce the desired changes in behavior) and a poorly
executed intervention (one that was incorrectly implemented, thus making the outcome evaluation spurious).
Process evaluations can also help identify specific
programmatic activities that may be effective or ineffective,
which can provide guidance for future studies
(Saunders et al., 2005).

FIGURE 1 A Review of the Use of Process Evaluations From Childhood Obesity Prevention Programs
[Figure 1 is a flow diagram. Five reviews contributed studies: Cook-Cottone, Casey, & Feeley (2009; articles from 1997-2008, n = 40); Kanekar & Sharma (2008-2009; articles from 2000-2007, n = 5); Shaya, Flores, Gbarayor, & Wang (2008; articles from 1986-2006, n = 51); Gonzalez-Suarez, Worley, Grimmer-Somers, & Dones (2009; articles from 1995-2007, n = 19); and Katz, O'Connell, Njike, Yeh, & Nawaz (2008; articles from 1966-2004, n = 19). Together these yielded 96 unique studies; 9 were excluded (5 not primary prevention studies, 3 published in a language other than English, 1 not peer reviewed), leaving 87 included studies.]

For example, through a process
evaluation an investigator may find that children enjoy
taste-testing foods that they have previously never been
exposed to, but they do not enjoy having group discussions
about the pros and cons of engaging in health-related behaviors. Process evaluations are not only
important from a methodological standpoint but some
suggest that their underutilization may be one of the
major contributors to why obesity prevention programs
have produced mixed and modest results in recent
years (Thomas, 2006).
Among the available frameworks for process evaluations,
Saunders et al. (2005) outline a useful six-step process for developing and using six types of process evaluations deemed especially important for obesity prevention programs. The components of this framework include fidelity (the extent to which the intervention
was delivered as planned), dose delivered (assurance
that program lessons were implemented in the intended
order and for the amount of time planned), dose
received (the extent to which the intervention was well
received by the participants), reach (or attendance),
recruitment (an assessment of what tasks were implemented
to approach and invite participants to be
involved with the study), and context (aspects of the
environment that have the potential to influence the
implementation of an intervention or study variables,
or possible contamination the comparison group might
have by being exposed to the experimental program).
Using this framework, this study reports the results for
a process evaluation for the “Comics for Health” program,
in an attempt to build on what little work has
been done in this area, and help researchers by sharing
practical advice to overcome barriers they may face in
their future studies. Triangulation of methods was also
employed to better understand how the program was
implemented, including vantage points from the program
facilitator, after-school staff members, and children
enrolled in the program. Results of the outcome evaluation
have been discussed elsewhere (Branscum, 2011).
>>Method
The “Comics for Health” program is a social cognitive
theory–based childhood obesity primary prevention
program. The methods, details of the intervention,
and outcome analyses have been detailed elsewhere
(Branscum, 2011). To summarize, this intervention was tested against a knowledge-based obesity prevention program for its effects on BMI percentile, key obesity-related behaviors (fruit and vegetable consumption, sugar-sweetened beverage consumption, physical activity, and screen time engagement), and key constructs of social cognitive theory (self-efficacy, self-control, and expectations) related to each behavior. Both
the theory- and knowledge-based programs consisted of
four lessons targeting behavioral recommendations set forth by the American Medical Association's expert committee regarding the prevention, assessment, and treatment of child and adolescent overweight and obesity (Barlow, 2007; Rao, 2008). In both programs, each lesson focused on one specific lifestyle behavior. The knowledge-based intervention chose program activities to mediate behavior change solely through building awareness and knowledge, such as learning the recommended number of servings of fruits and vegetables and defining the term physical activity. The theory-based intervention used theory-oriented program activities to mediate behavior change, such as taking small, achievable steps for learning and mastering new skills, and participating in role-plays to practice new skills and behaviors in a pretend setting with either a peer or parent. Both interventions also included aspects
of making and reading comic books, and included
activities to help children create their own original
comic book or strip. “Comics for Health” was evaluated
using a group randomized controlled design, whereby
12 YMCA-sponsored after-school programs were randomized
to either the theory-based (n = 6 with 37 children)
or knowledge-based (n = 6 with 34 children)
group. Approval from the sponsoring university was
obtained before data collection began.
Recruitment
Recruitment procedures were consistent at each site,
as controlled by the program facilitator. The benefit of
working with a licensed after-school care provider,
such as the YMCA, was that parents were required to
be physically present when picking up their children.
Therefore, during the first few weeks of the study, the program
facilitator was able to approach parents of potential
participants and explain the details of the study in
order to collect parent permission forms. This process
was replicated and virtually identical at each site, making
it apparent that the potential for bias in participation
rates among the various sites was very low.
Program Fidelity
To evaluate the fidelity between the planned and
actual implementation of the intervention for each of
the eight sessions (four theory-based and four knowledge-based), structured tally sheets were first created. Each
form listed the major objectives or tasks the program
facilitator was to complete for the corresponding lesson,
and in the instructions, the observer was asked to
record each objective as either being adequately completed
(scored as 1) or not completed (scored as 0).
Overall, lessons ranged from having 18 to 29 total tasks.
The second step for creating the structured tally sheets
was to establish their face and content validity, and
readability. This was achieved by recruiting a panel of
six experts (five university professors and one director
from the after-school program) to simultaneously compare
them to a detailed description of each lesson plan.
This process included two rounds of review: In the first
round, experts gave initial suggestions for improvements,
and in the second round, they evaluated the
revised tally sheets and gave any final suggestions.
Once the tally sheets were created and validated, they
were deemed appropriate for use in the intervention. In
all, two tally sheets were completed for each lesson at
each site, so that program delivery was assessed from two separate vantage points. For each lesson, an
after-school staff member observed the program and
completed the corresponding tally sheet, and concurrently
the program facilitator completed a separate tally
sheet as a self-check.
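To make the scoring concrete, the following minimal Python sketch shows how one sheet of binary task scores translates into the fidelity percentages reported later in Table 1. The original tally sheets were completed on paper, so the data structure and scores here are hypothetical.

```python
# Minimal sketch: scoring one completed tally sheet. Each task is recorded
# as 1 (adequately completed) or 0 (not completed); fidelity is the share
# of tasks completed. The scores below are hypothetical.

def fidelity_percent(task_scores):
    """Return the percentage of tasks scored as completed."""
    return 100.0 * sum(task_scores) / len(task_scores)

# A lesson with 18 tasks, 2 of which were not completed:
lesson_scores = [1] * 16 + [0] * 2
print(f"{fidelity_percent(lesson_scores):.0f}%")  # -> 89%, cf. Site 6, Lesson 3
```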
Program Dose
The intended dose for both the theory- and knowledge-based programs was four lessons, each lasting 30 minutes. Two separate elements of dose were
evaluated for this study: dose delivered and dose
received. To evaluate dose delivered, the program facilitator
kept field notes to assure that each after-school
program received each lesson in the appropriate order.
The program facilitator also used a stopwatch to track
the amount of time taken to implement each lesson at
each after-school program, as a means of assuring program
activities were delivered in a timely fashion. An
analysis of variance (ANOVA) was then conducted to compare the amount of time across sessions (Lessons 1 through 4) and intervention conditions (n = 2 interventions), including their interaction, to assure equivalency among sessions and treatment conditions. To evaluate the dose received, after-school
staff members present during the implementation of the
program completed a questionnaire containing open-ended items pertaining to the program's feasibility (e.g., What were your opinions about the timing of the program?) and its acceptability to the children (e.g., What benefits do you perceive the children got from participating in the program?).
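The timing analysis itself was run in PASW (SPSS) 18. As a minimal re-implementation sketch, the Python code below fits an equivalent two-way ANOVA; the file name, data layout, and column names are hypothetical (one row per lesson delivery, with its condition, session number, and stopwatch time in minutes).

```python
# A minimal sketch of the two-way ANOVA on lesson durations (the authors
# used PASW/SPSS 18, not Python). "lesson_times.csv" is hypothetical:
# 12 sites x 4 sessions = 48 rows with columns condition, session, minutes.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

times = pd.read_csv("lesson_times.csv")

# Main effects of condition and session, plus their interaction,
# mirroring the comparisons reported in Table 2.
model = smf.ols("minutes ~ C(condition) * C(session)", data=times).fit()
print(anova_lm(model, typ=2))
```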
Program Reach
To evaluate program reach, attendance was recorded
for each lesson at each site by an after-school staff
member. Children who did not participate in the initial
sessions were tracked and given informal make-up sessions
to assure they participated in all program activities.
To evaluate whether attendance was equivalent between the two intervention groups, a chi-square test was used. All data were analyzed using Predictive Analytics Software (PASW), version 18.
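For reference, the chi-square test can be reproduced from the attendance counts reported later in Table 3. The following minimal sketch uses scipy rather than the authors' PASW:

```python
# Minimal sketch of the attendance chi-square test using the counts from
# Table 3. Rows: number of lessons attended (4, 3, 2); columns:
# experimental group, comparison group.
from scipy.stats import chi2_contingency

attendance = [
    [27, 23],  # attended all 4 lessons
    [6, 6],    # attended 3 lessons
    [4, 5],    # attended 2 lessons
]
chi2, p, dof, _ = chi2_contingency(attendance)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")  # chi2(2) = 0.305, p = 0.859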
Context
The context of both programs was controlled and
evaluated in a number of ways. First, we controlled the
context by having the same program facilitator implement
every lesson for both programs. This was done to
give children from all 12 after-school sites an almost
identical experience, given that there was no difference
in teaching style, no variation in personalities, and no
preexisting relationship between any of the children
and the program facilitator. Additionally, the program
facilitator was very familiar with the intervention,
since he was the primary author. Therefore, there was
no need for formal training of an implementation staff,
which could have been another source of potential bias.
Context was further evaluated in two ways. First, the
program facilitator, using field notes, documented the
presence of any competing or similar programs implemented
during the course of the study that could have
introduced bias to any outcome measures. Second, during the pretesting of both interventions, children were asked to report the number of times they had been taught about healthy eating and the number of times they had been taught about the importance of physical activity at home. This was hypothesized to be important since
children who have repeated exposure to health promotion
interventions are likely to be more susceptible
to changing their behaviors. To assure equivalence
between both groups for both variables, two separate
ANOVAs were used.
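A minimal sketch of one such equivalence check follows; the original analyses were run in PASW, and the per-child exposure counts below are hypothetical.

```python
# Minimal sketch of the group-equivalence check on prior exposure using a
# one-way ANOVA. The per-child exposure counts below are hypothetical.
from scipy.stats import f_oneway

theory_group = [2, 1, 3, 0, 2, 3, 1, 2]      # exposures reported at pretest
knowledge_group = [3, 2, 2, 1, 3, 3, 2, 2]   # one value per child

F, p = f_oneway(theory_group, knowledge_group)
print(f"F = {F:.2f}, p = {p:.3f}")
```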
>>Results
Program Fidelity
Percentages of tasks completed for each lesson in both groups are shown in Table 1. From Table 1, it was evident that both programs were implemented nearly perfectly (100%) at each site. There were
four instances, however, when the program was not
recorded as perfectly implemented (100%) by one or
both program evaluators. For Site 6 in the experimental
group, the program facilitator and the after-school staff
member both reported that 89% of Lesson 3 and 88%
of Lesson 4 was implemented. For Site 12 in the control
group, there was a discrepancy between the program
evaluators: Whereas the program facilitator reported that all lessons were perfectly implemented, the after-school staff member reported that only 70% of Lesson 1 and 95% of Lesson 2 were delivered. It was difficult to assess the reason for this discrepancy; however, it was likely that the after-school staff member was distracted during the program and either missed some of the tasks or did not record their completion. In all of these cases, however, lessons were implemented with close to 90% fidelity, indicating that only three or four tasks were missed when implementation was less than perfect.
TABLE 1
Degree of Program Fidelity for After-School Programs in the Experimental (Theory-Based) and Comparison (Knowledge-Based) Interventions

Group         Site  Observer              L1 (%)  L2 (%)  L3 (%)  L4 (%)  Avg. (%)  Std. (%)  PII
Experimental   1    After-school worker    100     100     100     100    100       95        1.05
               1    Program facilitator    100     100     100     100    100       95        1.05
               2    After-school worker    100     100     100     100    100       95        1.05
               2    Program facilitator    100     100     100     100    100       95        1.05
               3    After-school worker    100     100     100     100    100       95        1.05
               3    Program facilitator    100     100     100     100    100       95        1.05
               4    After-school worker    100     100     100     100    100       95        1.05
               4    Program facilitator    100     100     100     100    100       95        1.05
               5    After-school worker    100     100     100     100    100       95        1.05
               5    Program facilitator    100     100     100     100    100       95        1.05
               6    After-school worker    100     100      89      88     94.25    95        0.992
               6    Program facilitator    100     100      89      88     94.25    95        0.992
Comparison     7    After-school worker    100     100     100     100    100       95        1.05
               7    Program facilitator    100     100     100     100    100       95        1.05
               8    After-school worker    100     100     100     100    100       95        1.05
               8    Program facilitator    100     100     100     100    100       95        1.05
               9    After-school worker    100     100     100     100    100       95        1.05
               9    Program facilitator    100     100     100     100    100       95        1.05
              10    After-school worker    100     100     100     100    100       95        1.05
              10    Program facilitator    100     100     100     100    100       95        1.05
              11    After-school worker    100     100     100     100    100       95        1.05
              11    Program facilitator    100     100     100     100    100       95        1.05
              12    After-school worker     70      95     100     100     91.25    95        0.961
              12    Program facilitator    100     100     100     100    100       95        1.05

Note. L1-L4 = Lessons 1-4; Avg. = average exposure; Std. = performance standard; PII = Program Implementation Index.
Program Dose
The amount of time taken to implement each lesson
and a comparison between groups (theory based and
knowledge based), sessions (Sessions 1 through 4), and
the interaction between the two are presented in Table 2.
From Table 2, it was evident that the actual amount of
time taken to implement each lesson was very close to
the planned 30 minutes, and there was no apparent difference
between groups and sessions. The program
facilitator also recorded that each of the 12 after-school
sites received all four lessons of their program in their
intended order.
TABLE 2
Total Time for Implementing Each Lesson for Both Interventions

Experimental (theory based)a,b          Comparison (knowledge based)a,b
Site   Session Mean (SD), Minutes       Site   Session Mean (SD), Minutes
1      31.0 (1.41)                      7      30.25 (2.06)
2      31.25 (0.95)                     8      30.75 (0.96)
3      31.25 (2.06)                     9      31.0 (1.41)
4      30.75 (0.96)                     10     30.5 (1.73)
5      31.25 (1.89)                     11     31.0 (1.41)
6      30.0 (1.83)                      12     30.25 (0.96)
Group averageb,c: 30.92 (1.47)          Group averageb,c: 30.63 (1.35)

a. p value for between sessions (p > .473).
b. p value for interaction for sessions and groups (p > .316).
c. p value for between groups (p > .477).
Regarding the feasibility and acceptability of both
programs, it is important to note that after-school staff members were initially blinded to which program their site received. However, since both the
theory- and knowledge-based programs were alike in
several ways, feedback received from staff members
was similar. For example, many stated that the length
of each lesson (30 minutes) and the entire length of the
program (4 weeks) were both appropriate and desirable
for their after-school programs. They also commented
that the four behaviors targeted during the program
were important and relevant to their children.
This was a good length because it wasn’t too short or too long of a time period.
I think it was a great length of four weeks that kept
the children interested.
I think the children were interested because they
were topics the kids could relate with.
Staff members also reported that their children
enjoyed the idea of designing and creating their own
comic books, which was an overall goal for both programs.
Staff members reported that the opportunity to create an original comic book sparked the interest of some children who initially had none and, in some cases, built the abilities of students who had an initial interest but no experience with creating comic books. Given the brevity of the program, however, staff members also noted that many of their children did not have enough time to fully develop and finish their comic books, and that with additional time the children would have enjoyed more direction for this activity.
I don’t think they really got to explore the whole
comic thing much.
Last, staff members commented on their perceived barriers to the implementation of the program. Namely, competing activities, such as sports teams, made it impossible for some children to attend every day; surrounding children not enrolled in the program created an atmosphere that was disruptive at times; and although a majority of the children were interested and engaged in the program, not all of them wanted to make a comic book.
In response to perceived barriers of the program:
Not having a quiet enough area where the kids could
focus.
Weather and kids missing or leaving at varying times.
Reach
Table 3 shows the record for attendance on the initial
day of program implementation. From Table 3, it
was evident that a majority of children from both
groups attended the entirety of the program as originally
intended, and some required make-up sessions.
Results from a chi-square test also suggested that there was no significant difference between groups in the number of lessons attended.
TABLE 3
Attendance of Children for Both Intervention Groups

No. of Lessons Attended  Experimental Group, n (%)  Comparison Group, n (%)  Overall, n (%)  χ2 (df)    p
4                        27 (73.0)                  23 (67.7)                50 (70.4)       0.305 (2)  .859
3                        6 (16.2)                   6 (17.6)                 12 (16.9)
2                        4 (10.8)                   5 (14.7)                 9 (12.7)
Total                    37 (100)                   34 (100)                 71 (100)
Context
As documented using field notes by the program
facilitator, it was apparent that many children were
already familiar with messages and concepts presented
by the program. For example, during the lesson pertaining
to fruit and vegetable consumption, without
cues from the facilitator many children were able to
identify various types of fruits and vegetables, and
recall the recommended daily amount for consumption.
It was later found that their high level of preexisting
knowledge could have come from a variety of
sources. For example, parental influence was apparently
high in this group. To illustrate, in the lesson
targeting screen time, although the objective of the lesson
was to promote the reduction of screen time to no
more than 2 hours per day, many children from both
programs reported that their parents already controlled
the amount of screen time they were allowed to have
each day, and in many cases, it was less than 2 hours.
Also, during the lesson targeting the reduction of sugar-sweetened beverages, many children reported that their
parents highly controlled the beverages that they were
allowed to consume, and many reported that they were
not permitted to consume any type of carbonated beverage.
Children may also have been already familiar
with the concepts presented during the intervention
because of the number of times they reported participating
in previous health promotion programs. To illustrate,
at the time of pretesting children in the theory-based
group reported 1.89 previous exposures (σ = 1.28) to
programs promoting healthy eating whereas children
in the knowledge-based group reported 2.43 previous
exposures (σ = 0.97). For previous exposure to programs promoting physical activity, children in the theory-based group reported 2.17 previous exposures (σ = 1.08), and children in the knowledge-based group reported 2.35 previous exposures (σ = 1.05). Results
from two separate ANOVAs indicated that there were
no differences between groups for participation in
either type of program (healthy eating, p = .06; physical
activity, p = .55).
The program facilitator also observed and recorded two competing programs with similar goals and objectives that were implemented concurrently with this intervention. Halfway through this intervention, the program Jump Rope for Heart, a fund-raising program sponsored by both the American Alliance for Health, Physical Education, Recreation and Dance and the American Heart Association, which encourages children to be physically active by jumping rope, was initiated in all elementary schools used in this study. Jump Rope for Heart was implemented school-wide, and it was likely reinforced by school principals, teachers, and the children's own friends. Parents were also informed of the program and
asked to help children raise money for the sponsoring
organizations. Another competing program was the
YMCA-created program Y-Kids Are Fit. This was not a standard program containing a series of lessons; instead, Y-Kids Are Fit consisted of various games and activities that after-school staff members were encouraged to use to increase the amount of physical activity children engaged in while in the program. It was also not mandated by the YMCA, and after-school staff members implemented the program at their
own discretion. Hence, the program was implemented
differently at each site, making it extremely difficult to
evaluate how much and to what extent children were
exposed to the program.
Finally, toward the end of the intervention, the program facilitator found that there may have been contamination from after-school staff members at some sites, most notably in the comparison condition. Since the comparison intervention was a knowledge-based program, and staff members were unaware of which program the children were receiving, they may have perceived the intervention as weak and attempted to reinforce the health messages in an attempt to further enhance the program. This was observed once during
the study at a comparison site during the lesson
targeting sugar-sweetened beverages. The after-school staff member at that site reported, with regard to teaching the children about the health topics and comic books, that the program
. . . could be much more extensive.
It was, however, unknown how much and to what
extent reinforcement was given at each site by staff
members, since this was not apparent until the end of
the program.
>>Discussion
The results in this article present an overview of the
comprehensive process evaluation implemented for a
social cognitive theory–based and a knowledge-based
program for the prevention of childhood obesity. We hope we have made the case that including multiple types of process evaluation in health programs is needed because doing so gives researchers and others an opportunity to view program implementation from more than one vantage point. For example, if fidelity and dose had been the only process evaluations implemented, it would have been uncertain whether the program was acceptable to the after-school staff or well received by the target audience.
Although there appeared to be an overall high degree of fidelity for both programs, there were some discrepancies in implementation in both groups. Whereas this may raise some concern, it is important to note that the discrepancies were observed for two lessons in each treatment group, so any potential bias they created was likely shared equally between the groups. As Durlak and DuPre (2008) note, it is unrealistic for health promotion interventions to expect perfect (100%) or even near-perfect implementation.
Windsor and colleagues (2004) describe the computation of a Program Implementation Index (PII), in which the actual implementation (A) of a program is divided by an a priori expected performance standard (D); this ratio can help in interpreting the adequacy of a program's implementation. Although no standard currently exists for an appropriate PII, they note that a PII of ≥90% would indicate an excellent level of implementation.
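As a worked illustration using the values in Table 1, where the a priori performance standard was D = 95%: Site 6's average exposure of A = 94.25% yields PII = A / D = 94.25 / 95 ≈ 0.992, whereas a site with perfect implementation yields PII = 100 / 95 ≈ 1.05.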
Using these criteria, Table 1 presents the PII
for every lesson for both programs. From Table 1, it was
evident that all lessons yielded an acceptable PII level,
regardless of observer. It is also important to note that
this study used both observational and self-report
measures for program fidelity. A potential weakness of this method was that after-school staff members
were not formally trained to complete such a task, and
the use of trained research personnel would have been
stronger. This was done largely because of financial
constraints, in that there was no money available to
hire additional personnel, and partially, for practical
reasons, in that after-school staff members were required
to be with their children at all times because of child
care licensing laws. Staff members also did not report
any difficulties implementing the process evaluations.
For future studies, this may be an acceptable approach
that other researchers can use if faced with similar
financial and personnel constraints. This also may be an
appropriate first step for having the after-school staff ultimately
implement the lessons themselves. During the
first year, they could observe the program, which in turn
prepares them for implementation in the second year.
Fidelity was also evaluated by comparing the perfectly implemented sites with the less than perfectly implemented sites, measuring the differences between these groups using an ANOVA for all study variables targeted in the lessons in question. For example, in the treatment group, Sites 1 through 5 were considered perfect for all lessons, and Site 6 was considered less than perfect for Lessons 3 and 4, which targeted physical activity and fruit and vegetable consumption, respectively. Using separate repeated measures
ANOVAs, we found that there was no difference between
these two groups for any study variables related to these
behaviors. This was again repeated for the control sites:
Sites 7 through 11 were considered perfect, and Site 12
was considered less than perfect for Lessons 1 and 2,
which targeted screen time and sugar-sweetened
beverage consumption, respectively. Separate repeated
measures ANOVAs were used between groups for all
study variables related to these behaviors, and again, no
significant differences were found between groups.
Each lesson was implemented very close to the planned 30-minute goal, and the program appeared to be well received by the after-school staff members and the children. In theory, if a difference had been found in the amount of time between sites, a statistical comparison between the two groups (as previously mentioned with fidelity) would be warranted. The results presented in Branscum (2011) are strengthened because timing was consistent among all sites. For dose received, while we evaluated
the program from the after-school workers’ vantage
point, this would have been further enhanced by evaluating
the children’s beliefs and attitudes directly. In practice,
this could have been done using either focus groups
or questionnaires with closed- and open-ended items that
were similar to those completed by the after-school staff.
Parents were also a missing component of this process evaluation; their input would have been extremely helpful in determining their attitudes toward the program and ways they could have been included in the intervention.
With regard to reach, attendance was not perfect in either group. This was expected, given the sporadic nature of after-school programming. To circumvent this problem, we implemented informal make-up sessions for children who were not present at all of the lessons, to assure that they were all exposed to the program in its entirety. On one hand, it would have been ideal to give formal make-up sessions to these children; however, we were not sure how practical this
could have been. For example, at some sites, only one
child missed one of the lessons, and we were not sure
how effective it would have been to formally implement
an entire lesson with one child. Additionally, the
costs of doing this at multiple sites for multiple lessons
would have also been high and likely prohibitive. In practice, unless circumstances are very tightly controlled, researchers working with members of the community should expect that attendance will almost never be perfect. Therefore, this should be anticipated before the start of the intervention, and a protocol for handling imperfect attendance should be established. Given the brevity of the program and the relatively small sample size we were able to obtain, we decided to keep all of the children in
the study regardless of initial attendance. Options that
other researchers have include giving formal make-up
sessions or retaining only participants who achieve a
100% attendance record. In theory, attendance could also be evaluated statistically: researchers could categorize the variable and compare perfect attendees with less than perfect attendees, or keep attendance as a continuous variable and evaluate whether attendance predicts better health outcomes.
It was somewhat surprising to find so many potential
areas for contamination. Childhood obesity is a pressing
issue in today’s society, and many interventions
sponsored by various organizations are currently being
implemented to address this problem. When designing
efficacy trials such as those presented in this article, it
may be important for researchers to evaluate the presence
of additional programs currently being implemented
and those scheduled in subsequent months at the
selected venue. If programs with similar goals and
objectives are to be implemented, in practice, researchers
must decide to (a) find an alternative venue with no
competing programs, (b) ask the venue to refrain from
participating in outside programs, or (c) implement the
program and report the possibility of contamination.
This is a similar issue with regard to the after-school
personnel who reinforced program messages to their
children. Although this practice is ideal and ultimately
needed to help enhance programs’ effectiveness, if the
reinforcement is different among intervention sites,
then the potential for contamination is high, which
may again contribute to biased outcomes. Similarly
then, it may be important for researchers to address this
issue before the intervention and ask personnel to
either (a) refrain from any reinforcement or (b) follow a
standardized reinforcement protocol. If the latter is
chosen, then additional process evaluations should be
used to assure full adherence to the protocol. As previously mentioned, if process evaluations find that some sites are affected differently than others, this exposure could theoretically serve as a potential covariate and should be tested as such to determine whether the contamination had a significant impact on outcome measures. In this study, there was no need to test the significance of possible contaminators, since all sites were affected similarly by outside programs.
As Young et al. (2008) report, to move forward in
this area of research, researchers should use lessons
learned from previous studies. We have presented methods
that other researchers can utilize in the planning
and implementation of future process evaluations. The
findings presented here bring to light important factors
that researchers should consider when implementing
evaluation methods.
References
Barlow, S. E. (2007). Expert committee recommendations regarding
the prevention, assessment, and treatment of child and adolescent
overweight and obesity: Summary report. Pediatrics,
120(Suppl. 4), S164-S192.
Branscum, P. (2011). Designing and evaluating an after-school
social cognitive theory based comic book intervention for the
prevention of childhood obesity among elementary aged school
children. Retrieved from OhioLINK ETD Center. (Document number:
ucin1311775201)
Cook-Cottone, C., Casey, C. M., & Feeley, T. H. (2009). A meta-analytic review of obesity prevention in the schools: 1997-2008. Psychology in the Schools, 46, 695-719.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A
review of research on the influence of implementation on program
outcomes and the factors affecting implementation. American
Journal of Community Psychology, 41, 327-350.
Gonzalez-Suarez, C., Worley, A., Grimmer-Somers, K., & Dones, V. (2009). School-based interventions on childhood obesity: A meta-analysis. American Journal of Preventive Medicine, 37, 418-427.
Kanekar, A., & Sharma, M. (2008-2009). Meta-analysis of school-based childhood obesity intervention in the U.K. and U.S. International Quarterly of Community Health Education, 29, 241-256.
Katz, D. L., O’Connell, M., Njike, V. Y., Yeh, M. C., & Nawaz, H.
(2008). Strategies for the prevention and control of obesity in the
school setting: A systematic review and meta-analysis. International
Journal of Obesity, 32, 1780-1789.
Rao, G. (2008). Childhood obesity: Highlights of AMA expert committee
recommendations. American Family Physician, 78, 56-63.
Saunders, R. P., Evans, M. H., & Joshi, P. (2005). Developing a process-evaluation plan for assessing health promotion program implementation: A how-to guide. Health Promotion Practice, 6, 134-147.
Shaya, F. T., Flores, D., Gbarayor, C. M., & Wang, J. (2008). School-based obesity interventions: A literature review. Journal of School Health, 78, 189-196.
Thomas, H. (2006). Obesity prevention programs for children and
youth: Why are their results so modest? Health Education Research,
21, 783-795.
Windsor, R., Clark, N., Boyd, N. R., & Goodman, R. M. (2004).
Evaluation of health promotion, health education, and disease
prevention (3rd ed.). New York, NY: McGraw-Hill.
Young, D. R., Steckler, A., Cohen, S., Pratt, C., Felton, G., Moe,
S. G., . . . Raburn, B. (2008). Process evaluation results from a
school- and community-linked intervention: The Trial of Activity
for Adolescent Girls (TAAG). Health Education Research, 23,
976–986.