Rating Outlets for Health Care Management Research: An Update and Extension
Eric S. Williams, Randall T. Stewart, Stephen O'Connor, Grant T. Savage and Richard Shewchuk
Med Care Res Rev 2002 59: 337. DOI: 10.1177/1077558702059003006
The online version of this article can be found at: https://mcr.sagepub.com/content/59/3/337
Published by Sage Publications: https://www.sagepublications.com
Version of Record, Sep 1, 2002
Downloaded from mcr.sagepub.com at WALDEN UNIVERSITY on October 11, 2012
Rating Outlets for
Health Care Management Research:
An Update and Extension
Eric S. Williams
Randall T. Stewart
University of Alabama
Stephen O’Connor
University of Alabama at Birmingham
Grant T. Savage
University of Alabama
Richard Shewchuk
University of Alabama at Birmingham
The ongoing discussion of our intellectual community requires that occasionally an effort
be made to value the outlets for our research and erect guideposts for our colleagues to signal
important contributions to our discipline. This study extends previous work through
a survey sent to 1,254 academics involved in health care management research that asked
them to rate 54 potential outlets for their research. Ratings were made on journal knowledge,
quality, and relevance. Two survey waves resulted in 389 responses (adjusted
response rate 37.8 percent). For quality and relevance, journal rankings were separated
into A, B, and C categories. The results correlated strongly with the two previous studies
in this area. This study extends previous research and provides a categorization of journals
on knowledge, quality, and relevance that may assist in faculty performance evaluation
and identification of appropriate outlets for manuscripts.
This research was supported through the generosity of the University of Alabama Management and Marketing Department. The authors would like to thank Elizabeth Floyd for her assistance with data entry. We would like to extend special thanks to all of our health care management colleagues who participated in this study. This article, submitted to Medical Care Research and Review on May 11, 2001, was revised and accepted for publication on October 8, 2001.
Medical Care Research and Review, Vol. 59 No. 3, (September 2002) 337-352
© 2002 Sage Publications
The intellectual community studying the field of health care management
has engaged in a continuous discussion across many years, employing the
journal article as its chief currency. The maintenance of the quality and health
of this exchange requires two things among others: first, that the value of the
currency be easily ascertained and, second, that members of this community
know where this discussion takes place. Occasionally, members of this community
engage in an effort to address both of these issues.
Two such previous studies have addressed these issues through rating publication outlets for health care management research. Brooks, Walker, and Szorady (1991) collected quality ratings on 53 journals from 48 chairpersons of accredited health administration programs. Using a nonparametric technique, six tiers of journals were identified. In contrast, McCracken and Coffey (1996) focused on "business-oriented" health care management academics. They derived a list of 31 journals and asked health administration faculty from business schools to rate journals on perceived quality and relevance. The 164 responses were analyzed using the same nonparametric technique employed by Brooks, Walker, and Szorady. Five tiers of journals were identified on both quality and relevance. Overall, McCracken and Coffey's findings strongly correlate with those of Brooks, Walker, and Szorady.
NEW CONTRIBUTION
The authors of both previous works (Brooks, Walker, and Szorady 1991; McCracken and Coffey 1996) expressed the sentiment that periodically some effort be made to assess journal rankings in the health care management arena. Furthermore, they both emphasized the importance of exploring how perceptions of a journal may change across time. Because of such concerns, an up-to-date study that values the publication outlets for our research and guides members toward important contributions to our discipline is highly relevant. This study builds on and extends the previous literature in two ways. First, it incorporates a larger, more diverse sample of academics in health care management. Second, ratings of respondent journal knowledge are handled more formally than in previous research.
METHOD
SAMPLE
This study used two samples. The first sample comprised the entire domestic
membership (417) of the Academy of Management’s Health Care Management
Division (HCMD), a sample that was also used by McCracken and
Coffey (1996). This group represents academics who profess a research interest
in the health care management field. The second sample came from the
membership of the Association of University Programs in Health Administration
(AUPHA). Brooks, Walker, and Szorady (1991) surveyed chairs of its
accredited programs, and McCracken and Coffey surveyed a subset of business
school–employed AUPHA program faculty. The AUPHA represents a
more diverse set of health care disciplines than the HCMD sample. Our sample included 837 AUPHA members who held both doctoral degrees (Ph.D., Dr.P.H., J.D., M.D., Ed.D.) and tenure-track faculty rank (assistant, associate, or full professor) and who were not members of HCMD. When combined, the total sample contained 1,254 academics.
PROCEDURE
A four-page survey was printed in booklet form. The package that was
mailed included the survey, a business reply envelope, and a cover letter
explaining the purpose of the study and offering confidentiality. Reminder e-mail was sent to nonrespondents 3 weeks after the initial mailing. It included a
URL for downloading the survey. A second wave of surveys was mailed 2
months after the first mailing. Three weeks after that, a final reminder e-mail
was sent to nonrespondents.
The list of health care management journals we used drew on the work of Brooks, Walker, and Szorady (1991), which included a list of 53 journals. This list was supplemented by a new journal recommended for inclusion by McCracken and Coffey (1996). Three journals were removed from the combined list: two were no longer published (Dimensions in Health Services and Hospital Forum), and one had changed its editorial direction substantially (Journal of Long-Term Care Administration). After updating changed journal names and adding three new health care management journals known to us, we compiled a final list of 54 journals.
JOURNAL RATING SURVEY
The survey collected demographic data and journal ratings. The demographic
information included age, gender, academic rank, tenure status, location
of primary appointment, number of publications, and primary area of
health care expertise. The second section asked respondents to make up to three ratings for each journal. Respondents first rated their perceived level of knowledge. A 5-point scale was used with anchors of 1 = no knowledge, 2 = recognize journal, but know little about content, 3 = recognize journal and know content, 4 = read occasionally, and 5 = read regularly, review for, or published in. After rating
knowledge, respondents could also rate perceived quality and relevance. However, respondents were asked to do so only if they had rated their knowledge of the journal as 3 (recognize journal and know content) or greater. We reasoned that a rater who knew the content of a journal had sufficient knowledge of that journal to make a reliable rating. Our procedure formalizes the informal process used in previous research and provides additional information about respondents' level of knowledge of specific journals.
The ratings of quality and relevance utilized measures originally developed
for McCracken and Coffey (1996). Specifically, the quality rating question
asked, “Based on the articles published in this journal, how would you
rate the journal’s quality?” The relevance question asked, “How relevant do
you consider this journal as an outlet for health care management research?”
Observations of how academics informally rate journals form the basis for the response scale for both measures. Journals are frequently referred to as A, B, or C journals. Occasionally, pluses or minuses are added to create informal ratings such as an A– journal or a C+ journal. Taking advantage of this, a 9-point response scale was created with the lowest ranking being C– and the highest ranking being A+.
ANALYSIS
The analyses used the same methodology as the two previous studies (Brooks, Walker, and Szorady 1991; McCracken and Coffey 1996) and as described in Coe and Weinstock (1984). First, each journal was ranked in descending order according to its mean rating. Then, statistically distinct tiers were formed from the 54 journals using the chi-square goodness-of-fit method for a multinomial experiment (Keller, Warrick, and Bartell 1988). Brooks, Walker, and Szorady (1991) argued that the use of this nonparametric technique was justified by the decidedly skewed distribution of journal ratings, which would bias any parametric statistic assuming a normal distribution. A tier was defined as a grouping of at least 4 journals having similar rating distributions with respect to the 9-point scale. Thus, this process began with the aggregate rating distribution of the top 4 journals serving as the standard to which the rating distribution of the 5th journal (and each subsequent journal) was compared for admission to or rejection from the tier. A significant chi-square value (at alpha of .05) meant that the distribution of the candidate journal was statistically different from the aggregate distribution of the top 4 journals and should be rejected. Alternatively, a nonsignificant chi-square meant that the candidate journal should be included in that tier. When 2 journals in succession were rejected for admission to the tier, the tier was considered
complete. In that case, the next 4 journals in order by mean rating (including
the 2 rejected journals) formed the standard for the next tier. This process continued
down to the lowest ranked journal.
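The tiering procedure described above can be sketched in code. This is a minimal illustration, not the authors' actual analysis: the journal names and count distributions below are invented, the chi-square critical value for df = 8 at alpha = .05 is hardcoded to avoid external dependencies, and the handling of sparse rating categories and of a lone rejection followed by an acceptance is simplified, since the article does not specify those cases.

```python
# Sketch of the chi-square tiering procedure described above (after Coe and
# Weinstock 1984). All journal data used with it are hypothetical; only the
# logic mirrors the article's description.

CHI2_CRIT_DF8 = 15.507  # chi-square critical value, alpha = .05, df = 9 - 1

def chi2_stat(observed, expected):
    """Pearson chi-square statistic; empty expected cells are skipped
    (a simplification -- in practice sparse categories would be pooled)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

def form_tiers(journals):
    """journals: (name, counts) pairs sorted by descending mean rating,
    where counts has 9 entries over the C- .. A+ response scale."""
    tiers, i, n_j = [], 0, len(journals)
    while i < n_j:
        # The aggregate distribution of the next 4 journals is the standard.
        tier_idx = list(range(i, min(i + 4, n_j)))
        agg = [sum(journals[t][1][k] for t in tier_idx) for k in range(9)]
        total = sum(agg)
        j, reject_run, first_reject = i + 4, 0, None
        while j < n_j:
            counts = journals[j][1]
            expected = [a * sum(counts) / total for a in agg]
            if chi2_stat(counts, expected) > CHI2_CRIT_DF8:
                reject_run += 1          # significantly different: reject
                if first_reject is None:
                    first_reject = j
                if reject_run == 2:      # two rejections in a row close the tier
                    break
            else:
                tier_idx.append(j)       # similar distribution: admit to tier
                reject_run, first_reject = 0, None
            j += 1
        tiers.append([journals[t][0] for t in tier_idx])
        # The next tier's standard starts with the rejected journals.
        i = first_reject if first_reject is not None else j
    return tiers
```

For example, five journals sharing one rating distribution followed by four sharing a very different one would yield two tiers.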
RESULTS
Out of our sample of 1,254, we received 389 complete responses. In addition, 132 potential respondents declined participation because of self-perceived limitations in journal knowledge, and 93 academics were not contactable via mail due to relocation, retirement, or death. Removing these nonrespondents from the denominator, we calculated an adjusted response rate of 37.8 percent. To check our assumption that the HCMD and AUPHA samples could be combined for analyses, we correlated responses for the three rating tasks. We found strong correlations across quality (r = .92), relevance (r = .79), and knowledge (r = .89) ratings.
Furthermore, we addressed the issue of response bias through two analyses. The first examined the faculty rank of respondents in each of the two waves and compared it to the rank of nonrespondents. A chi-square test of the resulting 4 (rank) × 3 (two waves and nonrespondents) table was not significant (χ2 = 8.46, df = 6). The second analysis examined the comparability of the two waves of data. Nominal data were examined using chi-square tests of their distributions, and interval and ratio data were subjected to a one-way analysis of variance test of mean ratings. Only 12 of the 189 comparisons (6.35 percent) were significant at the .05 level, which is very close to the number of significant comparisons that would be expected by chance alone (about 10). Taken together, these analyses lend support to our contention that nonresponse bias is minimal.
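The response-rate and chance-expectation figures in the two paragraphs above follow directly from the reported counts; a quick arithmetic check (all numbers taken from the text):

```python
# Adjusted response rate: noncontactable academics and those who declined
# for lack of journal knowledge are removed from the denominator.
completed = 389
mailed = 1254
declined = 132     # self-perceived limitations in journal knowledge
unreachable = 93   # relocation, retirement, or death

adjusted_rate = completed / (mailed - declined - unreachable)
print(f"{adjusted_rate:.1%}")   # 37.8%

# Wave-comparability checks: 12 significant results out of 189 tests,
# versus roughly 189 * .05 expected under the null by chance alone.
print(f"{12 / 189:.2%}")        # 6.35%
print(f"{189 * 0.05:.2f}")      # 9.45, close to the "about 10" reported
```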
Our sample characteristics reveal that just less than two thirds of respondents
are men with an average age of 49.2 years (SD = 9.28, range = 28 to 86
years). Most are tenured (60.2 percent) and hold full (38.5 percent) or associate
professor (29.2 percent) rank. Slightly more than one quarter work in business
schools, and just less than one half work in schools of public health, schools of
allied health, or health-related professions. They are also active professionally,
with large percentages reviewing for conferences (71.7 percent) and for journals
(83.3 percent). Furthermore, many of our respondents serve on editorial
boards (40.4 percent) or as editor or associate editor of a journal (15.4 percent).
As expected, the range of primary health care expertise (or discipline) is very
broad, with 12.6 percent in health policy/strategy, 15.4 percent in organizational
behavior/theory, 10.5 percent in health economics, and 10.8 percent in
health services research.
Both previous works (Brooks, Walker, and Szorady 1991; McCracken and Coffey 1996) report the number of respondents rating each journal. We have formalized this process by collecting ratings of perceived knowledge that are shown in Table 1. Eleven tiers were created using the tiering process described earlier. The first tier contained only 4 journals: the New England Journal of Medicine, Health Affairs, Health Services Research, and the Journal of the American Medical Association. The next 2 journals were rejected in succession for admission to this tier because their response distributions were significantly different from that for the 4 journals of the 1st tier. The subsequent 10 tiers were distinguished using the tiering process. These ratings are substantially similar to the percentages of journals rated in the two previous works. For example, Health Services Research was rated by 47 of 48 graduate program directors in Brooks, Walker, and Szorady (1991) and 89 percent of the sample in McCracken and Coffey (1996), against 85 percent in this sample. Furthermore, we found substantial variation, ranging from 93 percent of respondents knowing the New England Journal of Medicine well enough to rate it to only 6 percent for the Journal of Legal Medicine. Even more interesting, we found that only one third of the 54 journals on this list were rated by at least 50 percent of respondents. These results reflect, in part, the diversity of our sample and the difficulty of knowing all the potential outlets for health care management research.
After respondents answered the level of knowledge question, they turned to rating the quality of the journal (see Table 2). Similar to results of the previous work (Brooks, Walker, and Szorady 1991; McCracken and Coffey 1996), the New England Journal of Medicine, Health Services Research, the Journal of the American Medical Association, and Medical Care formed the first tier. Eight other statistically distinct tiers emerged from our analysis. In terms of similarity, all of the journals included in the top two tiers of Brooks, Walker, and Szorady (1991) appear in the top three tiers of this work. Six of the seven first-tier journals ranked in terms of quality from McCracken and Coffey (1996) are found in the top two tiers in this work.
Following the classification scheme implicitly included in Brooks, Walker, and Szorady (1991), we categorized the 54 journals into A, B, or C journal groupings on the basis of perceived quality ratings. Any such attempt is, admittedly, subjective and open to criticism. Applying this idea to the nine tiers in Table 2, tiers 1, 2, and 3 are labeled A journals; tiers 4, 5, and 6, B journals; and the final three tiers, C journals. Using this categorization, there are 14 journals that could be considered A journals, 20 B journals, and 20 C journals.
The final rating question considered was the perceived relevance of the
journal “as an outlet for health care management research” (see Table 3).
Health Affairs emerged on top by a healthy margin, followed by Medical Care
TABLE 1 Means and Rankings for Perceived Knowledge about Outlets for
Health Care Management Research
Journal n Rank Mean % Rating Tier
New England Journal of Medicine 374 1 4.13 93 1
Health Affairs 377 2 4.04 87 1
Health Services Research 378 3 3.98 85 1
Journal of the American Medical Association 374 4 3.96 88 1
Health Care Management Review 373 5 3.72 78 2
American Journal of Public Health 376 6 3.68 80 2
Milbank Quarterly 374 7 3.64 82 2
Medical Care 375 8 3.61 75 2
Inquiry 377 9 3.53 74 2
Health Care Financing Review 376 10 3.51 75 3
Journal of Health Politics, Policy, and Law 377 11 3.39 71 3
Modern Healthcare 374 12 3.24 68 3
Journal of Health Administration Education 376 13 3.19 65 3
Medical Care Research and Review 375 14 2.97 55 4
Journal of Healthcare Management 377 15 2.84 50 4
Journal of Health Economics 371 16 2.82 51 4
Social Science and Medicine 369 17 2.72 51 4
Frontiers of Health Services Management 375 18 2.70 52 5
Health Services Management Research 374 19 2.61 46 5
Hospitals and Health Networks 376 20.5 2.61 49 5
Healthcare Financial Management 377 20.5 2.50 44 5
The Gerontologist 377 22 2.44 41 6
Public Health Reports 372 23 2.37 40 6
Journal of Health and Social Behavior 375 24.5 2.34 37 6
Health Care Strategic Management 377 24.5 2.34 38 6
Journal of Health Care Finance 375 26 2.25 33 6
Journal of Gerontology 373 27 2.21 31 6
International Journal of Health Services 377 28 2.17 34 7
Hastings Center Report 376 29 2.12 32 7
American Journal of Epidemiology 371 30 2.09 29 7
Medical Group Management Journal 373 31 2.08 34 8
Journal of Health & Human Services Administration 374 32 2.07 28 8
Journal of Ambulatory Care Management 377 33 2.06 29 8
(continued)
Research and Review, Health Services Research, and Health Care Management Review. Five more statistically distinct tiers emerged from our analyses. Overall, this list is substantially similar to the ratings of journal relevance from McCracken and Coffey (1996), with the interesting exception of Health Affairs, which ranked in the middle of the second tier (8th out of 31 journals rated) in the McCracken and Coffey study, whereas it ranked 1st in the current study.
We also grouped journals into A, B, and C categories in terms of perceived relevance (see Table 3). We divided evenly across the six tiers, so that A journals fell within the first two tiers, B journals in the third and fourth tiers, and C journals in the fifth and sixth tiers. This division produced 10 journals in the A category, 16 in the B category, and 28 in the C category. Interestingly, 8 of the 11 journals that would be considered A journals in terms of quality also fall within the A category in terms of relevance.
To better assess the convergence of these results with those in the two previous works, we correlated our results with the mean ratings from the quality
Health Policy 375 34.5 2.02 27 8
Journal of Nursing Administration 376 34.5 2.02 27 8
Hospital Topics 377 36 1.99 28 8
Academic Medicine 377 37.5 1.95 26 8
Journal of Public Health Policy 374 37.5 1.95 25 8
Health Marketing Quarterly 378 39 1.93 23 8
American Journal of Law and Medicine 372 40 1.90 20 9
Daedalus 372 41 1.80 19 9
Nursing Administration Quarterly 374 42 1.79 20 9
Journal of Community Health 377 43 1.75 16 9
Health Progress 375 44 1.69 17 10
Journal of Behavioral Health Services and Research 376 45 1.67 14 10
Journal of Allied Health 375 46 1.59 12 10
Law, Medicine, & Health Care 373 47 1.56 10 10
Marketing Health Services 376 48 1.49 11 11
Home Health Care Services Quarterly 377 49.5 1.45 10 11
Health Systems Review 375 49.5 1.45 8 11
Journal of Legal Medicine 374 51 1.42 6 11
Journal of Urban Health 372 52 1.40 7 11
Journal of Medical Systems 375 53 1.38 7 11
Health Matrix 377 54 1.37 7 11
TABLE 1 Continued
Journal n Rank Mean % Rating Tier
TABLE 2 Means and Rankings for Perceived Quality of Outlets for Health
Care Management Research
Journal n Rank Mean Tier A, B, or C
New England Journal of Medicine 343 1 8.00 1 A
Health Services Research 319 2 7.82 1 A
Journal of the American Medical Association 325 3 7.56 1 A
Medical Care 276 4 7.47 1 A
Journal of Health Economics 184 5 7.46 2 A
Health Affairs 322 6 7.38 2 A
Milbank Quarterly 292 7 7.35 2 A
Inquiry 280 8 7.34 2 A
Medical Care Research and Review 200 9 7.26 2 A
Journal of Health Politics, Policy, and Law 267 10 7.12 3 A
American Journal of Epidemiology 114 11 6.91 3 A
Daedalus 77 12 6.87 3 A
Journal of Health and Social Behavior 136 13 6.82 3 A
Social Science and Medicine 184 14 6.70 3 A
Journal of Gerontology 117 15 6.64 4 B
Hastings Center Report 120 16 6.57 4 B
American Journal of Public Health 289 17 6.55 4 B
Health Care Financing Review 274 18 6.53 4 B
Health Care Management Review 287 19 6.45 4 B
Law, Medicine, & Health Care 44 20 6.25 4 B
The Gerontologist 155 21 6.19 4 B
American Journal of Law and Medicine 78 22 6.13 4 B
Health Policy 97 23 6.07 5 B
Health Services Management Research 167 24 6.05 5 B
Journal of Health Care Finance 124 25 5.94 5 B
Journal of Healthcare Management 193 26 5.90 5 B
Frontiers of Health Services Management 194 27 5.80 6 B
Academic Medicine 101 28.5 5.79 6 B
Marketing Health Services 43 28.5 5.79 6 B
Healthcare Financial Management 163 30 5.67 6 B
Public Health Reports 143 31 5.62 6 B
International Journal of Health Services 127 32 5.60 6 B
Journal of Public Health Policy 94 33 5.53 6 B
Journal of Behavioral Health Services and Research 52 34 5.52 6 B
Journal of Community Health 63 35 5.44 7 C
Health Care Strategic Management 138 36 5.38 7 C
(continued)
and relevance tables in McCracken and Coffey (1996) and from the quality ratings in Brooks, Walker, and Szorady (1991). We found a .92 correlation with Brooks, Walker, and Szorady and .96 with McCracken and Coffey for quality. In their earlier work, McCracken and Coffey reported a rank order correlation of .94 between the 31 journals common to both studies. For perceived relevance, we found a .88 correlation with the relevance ratings of McCracken and Coffey. These analyses show very substantial convergence across the three studies.
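A convergence check of this sort can be sketched in a few lines. This is a minimal illustration only: the pearson_r helper and the two rating vectors are hypothetical stand-ins, not the actual per-journal means from these studies.

```python
# Correlating one study's mean journal ratings with another's, as in the
# convergence analysis described above. Data below are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean quality ratings for the same journals in two studies:
study_a = [8.00, 7.82, 7.56, 7.47, 6.45, 4.04]
study_b = [7.90, 7.60, 7.70, 7.10, 6.20, 4.50]
print(round(pearson_r(study_a, study_b), 2))
```

A correlation near 1 across the shared journals is what "substantial convergence" means operationally here.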
Consistent with the previous works (Brooks, Walker, and Szorady 1991; McCracken and Coffey 1996), we asked respondents to nominate journals that they believed should have been included in the present study. Forty-three separate publications were named. Of these, 30 (70 percent) publications were nominated by only one respondent, 7 (16 percent) by two respondents, and 6 by three or more respondents. Following the criteria for list inclusion used by Brooks, Walker, and Szorady (1991), the journals nominated by at least three respondents were Academy of Management Journal, Advances in Health Care Management, Harvard Business Review, Journal of Rural Health, Managed Care
Journal of Legal Medicine 27 37 5.37 7 C
Journal of Nursing Administration 101 38 5.26 7 C
Journal of Medical Systems 27 39.5 5.22 7 C
Nursing Administration Quarterly 79 39.5 5.22 7 C
Journal of Health & Human Services Administration 101 41 5.20 7 C
Journal of Health Administration Education 238 42 5.17 7 C
Journal of Urban Health 27 43 5.04 7 C
Health Systems Review 31 44 5.00 8 C
Journal of Allied Health 50 45 4.94 8 C
Journal of Ambulatory Care Management 106 46 4.89 8 C
Health Marketing Quarterly 87 47 4.87 8 C
Health Matrix 27 48 4.85 8 C
Medical Group Management Journal 117 49 4.81 8 C
Health Progress 62 50 4.55 8 C
Home Health Care Services Quarterly 41 51 4.32 8 C
Hospital Topics 109 52 4.29 9 C
Hospitals and Health Networks 183 53 4.08 9 C
Modern Healthcare 249 54 4.04 9 C
TABLE 2 Continued
Journal n Rank Mean Tier A, B, or C
TABLE 3 Means and Rankings for Perceived Relevance of Outlets for
Health Care Management Research
Journal n Rank Mean Tier A, B, or C
Health Affairs 312 1 7.49 1 A
Medical Care Research and Review 195 2 7.11 1 A
Health Services Research 307 3 7.07 1 A
Health Care Management Review 278 4 7.03 1 A
New England Journal of Medicine 338 5.5 6.93 2 A
Inquiry 272 5.5 6.93 2 A
Medical Care 270 7 6.87 2 A
Milbank Quarterly 286 8 6.76 2 A
Journal of Health Politics, Policy, and Law 258 9 6.74 2 A
Health Care Financing Review 267 10 6.72 2 A
Journal of Healthcare Management 187 11 6.63 3 B
Frontiers of Health Services Management 190 12 6.53 3 B
Journal of the American Medical Association 317 13 6.52 3 B
Health Services Management Research 161 14 6.43 3 B
Healthcare Financial Management 159 15 6.30 3 B
Journal of Health Care Finance 120 16 6.29 3 B
Law, Medicine, & Health Care 39 17 6.28 3 B
Journal of Health Economics 179 18 6.16 4 B
Social Science and Medicine 181 19 6.07 4 B
Health Policy 94 20 5.95 4 B
Hastings Center Report 115 21 5.91 4 B
Marketing Health Services 41 22 5.88 4 B
Modern Healthcare 240 23.5 5.87 4 B
Medical Group Management Journal 115 23.5 5.87 4 B
American Journal of Law and Medicine 73 25 5.86 4 B
Health Care Strategic Management 136 26 5.84 4 B
American Journal of Public Health 280 27 5.81 5 C
Journal of Health Administration Education 235 28 5.68 5 C
Journal of Health and Social Behavior 131 29 5.66 5 C
Journal of Gerontology 111 30 5.64 5 C
Hospitals and Health Networks 175 31 5.63 5 C
International Journal of Health Services 125 32 5.61 5 C
Nursing Administration Quarterly 75 33 5.59 5 C
American Journal of Epidemiology 108 34 5.58 5 C
(continued)
Quarterly, and Nursing Economics. Academy of Management Journal and Harvard
Business Review are both general business periodicals, and Advances in Health
Care Management is an annual review series. Given the focus of this and the
previous studies on health care–specific journals published several times a
year, we recommend that Journal of Rural Health, Managed Care Quarterly, and
Nursing Economics be included in future research.
DISCUSSION
This study was a replication and extension of the works of Brooks, Walker, and Szorady (1991) and McCracken and Coffey (1996). The two common goals of these works and our study are to understand how researchers in this area value various journals as outlets for their work and to provide a benchmark by which to evaluate publication records. Our results track very closely the two
Journal of Ambulatory Care Management 105 35.5 5.56 5 C
The Gerontologist 149 35.5 5.56 5 C
Academic Medicine 100 37 5.53 5 C
Journal of Public Health Policy 91 38 5.51 5 C
Journal of Behavioral Health Services and Research 51 39.5 5.47 5 C
Public Health Reports 137 39.5 5.47 5 C
Journal of Community Health 61 41 5.44 5 C
Journal of Health & Human Services Administration 98 42 5.43 5 C
Journal of Legal Medicine 27 43 5.41 5 C
Journal of Nursing Administration 99 44 5.36 5 C
Journal of Medical Systems 26 45 5.35 5 C
Health Marketing Quarterly 86 46 5.26 5 C
Hospital Topics 101 47 5.21 6 C
Daedalus 77 48 5.19 6 C
Health Progress 59 49 5.19 6 C
Health Systems Review 30 50 5.13 6 C
Journal of Urban Health 28 51 4.96 6 C
Health Matrix 27 52.5 4.78 6 C
Home Health Care Services Quarterly 40 52.5 4.78 6 C
Journal of Allied Health 53 54 4.70 6 C
TABLE 3 Continued
Journal n Rank Mean Tier A, B, or C
previous studies, suggesting strong convergence. This convergence is particularly
revealing given the use of several distinct populations sampled across
these three studies.
We included a novel measure of the respondents' knowledge of the journals in this field and were surprised at the range of journal knowledge, from a high of 93 percent for the New England Journal of Medicine to a low of 6 percent for the Journal of Legal Medicine. This is, of course, considerably different in magnitude than the knowledge levels shown in the earlier two studies (Brooks, Walker, and Szorady 1991; McCracken and Coffey 1996). However, the overall rankings are fairly similar. We also found that only one third of the 54 journals were sufficiently well known to be rated by at least 50 percent of our respondents. One potential explanation for our results is the greater diversity of our sample. This may also be a reflection of the variety of fields and interests in which the respondents are engaged and the manifold difficulties of staying abreast of all the developments in the health care management field.
Related to our finding about level of journal knowledge among faculty is
the idea that a set of core journals in the health care management field could be
derived. In fact, McCracken and Coffey (1996) presented a list of 31 journals
they considered “core” to business-oriented health care faculty. While this list
might be useful to health administration faculty with a business orientation,
such a list is less valuable to the field as a whole given the variety of disciplines
contributing to the study of health care management. In fact, one only needs to look at the mapping of journal types onto tiers, which was undertaken by Brooks, Walker, and Szorady (1991, 760), to see that there are different clusters of journals relevant to specific disciplinary groups practicing in health care management.
Another difficulty was in trying to determine whether perceptions of journal quality or relevance had changed over time. McCracken and Coffey (1996) compared their rankings with those developed by Brooks, Walker, and Szorady (1991) and found that some of the rankings had changed between the studies. However, they were unable to definitively attribute this difference to actual changes in perception, given the large differences in sample and number of journals included. Because of these difficulties and the large correlations between our measures and those in the previous works, we concluded that trying to track changes is impractical at this time. However, this study might be useful as a benchmark to which future studies could be compared to track perceptual changes across time.
An interesting finding, which is consistent with Brooks, Walker, and
Szorady (1991), is that administratively oriented journals such as Health Care
Management Review, Healthcare Financial Management, Journal of Health and
Human Services Administration, and Frontiers of Health Services Management are
perceived by members of HCMD and AUPHA as somewhat less prestigious
publication outlets than health policy–oriented journals such as Health Services
Research, Medical Care, and Medical Care Research and Review. Within our
rating scheme, the administratively oriented journals would fall in the B
range. This is not to suggest that these journals lack quality; far from it. Rather,
it is an interesting perceptual bias within this sample, especially since several
administratively oriented publications were ranked higher on relevance than
their policy-oriented and clinically oriented counterparts. Such findings do
lead us to suggest that in some institutions, academics with a record of having
published only in administratively oriented journals might find defending
their records during the evaluation process more challenging. For those with
an interest in administrative research, submitting some manuscripts to highly
ranked health policy journals or to prestigious administrative journals outside
the field of health administration might be an advisable strategy (Park and
Gordon 1996; Tahai and Meyer 1999).
LIMITATIONS
One of our principal limitations lies in our response rate. Nonetheless, the
response rate of 37.8 percent is consistent with, and indeed higher than, the
average response rate for mail surveys containing 150 to 200 items (Yu and
Cooper 1983). The less-than-50-percent response rate was due, in part, to the
survey's length and the diversity of our sample. Some of the people sampled did not
consider themselves part of the health care management field. Furthermore,
we identified 132 people who felt comfortable in evaluating only a handful of
journals and declined to respond on that basis.
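As a quick arithmetic check, the adjusted denominator behind the 37.8 percent figure can be recovered from the numbers reported above (1,254 surveys sent, 389 responses). Note that the exact exclusion count is not broken out in this section, so the eligible-sample and exclusion figures below are implied rather than reported:

```python
# Back-of-envelope check of the response-rate figures reported above.
# Known from the article: 1,254 surveys sent and 389 responses,
# for an adjusted response rate of 37.8 percent.
sent = 1254
responses = 389
adjusted_rate = 0.378

raw_rate = responses / sent                          # rate before any adjustment
implied_eligible = round(responses / adjusted_rate)  # denominator implied by 37.8%
implied_excluded = sent - implied_eligible           # sample members treated as ineligible

print(f"raw rate: {raw_rate:.1%}")                    # about 31.0%
print(f"implied eligible sample: {implied_eligible}") # about 1,029
print(f"implied exclusions: {implied_excluded}")      # about 225
```

The roughly 225 implied exclusions are consistent with the 132 people noted above who declined to respond because they felt comfortable evaluating only a handful of journals, plus other ineligibles.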
Our second limitation is the same as one of the limitations discussed in the
McCracken and Coffey (1996) study: Social Science Citation Index citation
data to validate the ratings are available for a minority of the journals on our
list. McCracken and Coffey reported that only 10 of their 31 journals had citation
data, while we found citation data for 26 of 54 journals. However, Dame
and Wolinsky (1993), using the Social Science Citation Index information
available (on 24 journals), found correlations of .46 to .55 on three bibliographic
measures (impact factors). Using the textbook citation method suggested
in Brooks, Walker, and Szorady (1991), Larson and Kershaw (1993)
found correlations of .60 and .35 with two different ranking methods.
Together, these studies suggest that the perceptual measures used in this line
of research have a moderate level of validity.
FUTURE RESEARCH
There are a number of directions that future research in this area can take. A
critical need concerns the assessment of the changes in perceptions of journals
across time. Given the dissimilarity of samples and journals used in previous
works, a thorough analysis of changing perceptions is difficult. We recommend
that a survey similar to ours be conducted in 5 years. Once the future
study is conducted, our study can serve as a benchmark for comparison.
A second issue for future research is how to identify and add new journals
to the list presented here. All three studies have allowed respondents to identify
journals for potential inclusion. If a sufficient number of recommendations
were made for a single journal, it was added. For example, McCracken
and Coffey (1996) suggested adding two journals to their list of “core” journals
based on respondent requests. Indeed, one editor of a relatively new journal
contacted the primary author of this study asking why that particular journal
was not included in the survey. The lesson from the ensuing discussion is that
future studies should take a more proactive stance toward identifying new
journals in the field and adding them. The downside to such an approach is
the likely negative impact on response rates due to increased response
burden.
CONCLUSION
Because it is important to our intellectual community to value the currency
of our exchange and provide conceptual guideposts for community members,
it is worthwhile to occasionally update a benchmark of publication outlets.
This study updates this benchmark in terms of quality, relevance, and knowledge
of journals in the health care management field. We hope that our efforts
will prove valuable to our colleagues.
REFERENCES
Brooks, C. H., L. R. Walker, and R. Szorady. 1991. Rating journals in health care administration:
The perceptions of program chairpersons. Medical Care 29 (6): 755-63.
Coe, R., and I. Weinstock. 1984. Evaluating the management journals: A second look.
Academy of Management Journal 27 (3): 660-66.
Dame, M. A., and F. D. Wolinsky. 1993. Rating journals in health care administration:
The use of bibliometric measures. Medical Care 31 (6): 520-24.
Keller, G., B. Warrick, and H. Bartell. 1988. Statistics for management and economics: A systematic
approach. Belmont, CA: Wadsworth.
Larson, J. S., and R. Kershaw. 1993. Rating journals in health care administration by the
textbook citation method. Medical Care 31 (11): 1057-61.
McCracken, M. J., and B. S. Coffey. 1996. An empirical assessment of health care management
journals: A business perspective. Medical Care Research and Review 53:48-70.
Park, S. H., and M. E. Gordon. 1996. Publication records and tenure decisions in the
field of strategic management. Strategic Management Journal 17:109-28.
Tahai, A., and M. J. Meyer. 1999. A revealed preference study of management journals'
direct influences. Strategic Management Journal 20:279-96.
Yu, J., and H. Cooper. 1983. A quantitative review of research design effects on response
rates to questionnaires. Journal of Marketing Research 20:36-41.
Please use the format below to write the article.
http://twp.duke.edu/writing-studio
How to Read and Review a Scientific Journal Article:
Writing Summaries and Critiques
Definition of Genre
Summaries and critiques are two ways to write a review of a scientific journal article. Both types of writing ask you first to read and understand an article from the primary literature about your topic. The summary involves briefly but accurately stating the key points of the article for a reader who has not read the original article. The critique begins by summarizing the article and then analyzes and evaluates the author’s research. Summaries and critiques help you learn to synthesize information from different sources and are usually limited to two pages maximum.
Actions to Take
1. Skim the article without taking notes:
- Read the abstract. The abstract will tell you the major findings of the article and why they matter.
- Read first for the "big picture."
- Note any terms or techniques you need to define.
- Jot down any questions or parts you don't understand.
- If you are unfamiliar with any of the key concepts in the article, look them up in a textbook.
2. Re-read the article more carefully:
- Pay close attention to the "Materials and Methods" (please note that in some journals this section is at the very end of the paper) and "Results" sections.
- Ask yourself questions about the study, such as:
o Was the study repeated?
o What was the sample size? Is this representative of the larger population?
o What variables were held constant? Was there a control?
o What factors might affect the outcome?
3. Read the “Materials and Methods” and “Results” sections multiple times:
- Carefully examine the graphs, tables, and diagrams.
- Try to interpret the data first before reading the captions and details.
- Make sure you understand the article fully.
4. Before you begin the first draft of your summary:
- Try to describe the article in your own words first.
- Try to distill the article down to its "scientific essence."
- Include all the key points and be accurate.
- A reader who has not read the original article should be able to understand your summary.
- Example of a well-written summary:
The egg capsules of the marine snails Nucella lamellosa and N. lima protect developing embryos against low-salinity stress, even though the solute concentration within the capsules falls to near that of the surrounding water within about 1 h.
5. Write a draft of your summary:
 Don’t look at the article while writing, to make it easier to put the information in your own words and avoid unintentional plagiarism.
 Refer back to the article later for details and facts.
 Ask yourself questions as you write:
o What is the purpose of the study? What questions were asked?
o How did the study address these questions?
o What assumptions did the author make?
o What were the major findings?
o What surprised you or struck you as interesting?
o What questions are still unanswered?
Format
- A complete citation of the article goes at the top of the page, below your heading.
- Don't skip a line between the citation and the start of the essay.
- Indent the first line of the essay.
- Be concise and eliminate superfluous information.
Organization
- The introductory paragraph summarizes the background information and purpose of the research (specific questions the study researched).
- Then, explain the methods that were used to investigate the research questions (use past tense).
- Mention the major results of the study (use past tense).
- State what the author of the study learned.
Critique: A Critical Review and Assessment of the Article
- Include a summary as well as your own analysis and evaluation of the article.
- Know the article thoroughly.
- Do not include personal opinions.
- Be sure to distinguish your thoughts from the author's words.
- Focus on the positive aspects and what the author(s) of the study learned.
- Note limitations of the study at the end of the essay:
o Do the data and conclusions contradict each other?
o Is there sufficient data to support the author’s generalizations?
o What questions remain unanswered?
o How could future studies be improved?
This handout is condensed from information published in Pechenik, Jan A. “Writing Summaries and Critiques.” A Short Guide to Writing about Biology. Ed. Rebecca Gilpin. 6th ed. New York: Pearson, 2007. 130-138. Print.
Helpful Links:
- How to read a scientific paper (biology):
https://www.biochem.arizona.edu/classes/bioc568/papers.htm
- Finding and reviewing appropriate scientific articles. This site will help you select up-to-date and relevant articles for your review (biology):
https://www.stanford.edu/~siegelr/readingsci.htm