Why I might reject your research paper – an alternative view

I sit on the editorial board of three ELT-related journals and conduct reviews for two more. Respect me now! Yes, this is where old teachers go once we’ve grown weary of the conference circuit. I’m happy to help young ELT researchers spruce up a manuscript (certainly more so than when writing my own). 

 

If you want to know what causes most research journal reviewers to lose their hair, Google will be your trusty guide and source of wisdom; someone else out there has already said all of it better than I could. Today, though, I want to focus on some lesser-known flaws, personal peccadillos that I think should be part of the ELT research DON’T DO IT canon:

 

Let’s not beat around the bush. ELT research always involves two ephemeral factors: people and languages. Neither is as inherently reliable, empirical, or objective as a STEM topic, and we shouldn’t pretend otherwise. I say this because many ELT research journals, and the researchers who contribute papers, often do their best to ignore this elephant in the lab. They would like to believe that if a statistic appears in a paper, the whole enterprise has somehow entered the world of hardcore scientific enquiry. This often means blindly following the IMRaD (introduction, methods/materials, results, discussion) formula, as if all research writing were destined for the New England Journal of Medicine. In order to ground their research more ‘scientifically’ and to give credence to crunching numbers, many ELT researchers resort to a few questionable approaches:

 

Student survey fetishes

 

Student surveys, properly designed and appropriately applied, can be a useful and valid research tool. The dangers of constructing invalid surveys are legion and, once again, Google will be your friend if you want to know what they are. However, an increasing problem that deserves special mention is that of using ‘feel’ questions, such as:

 

‘Do you feel that your performance improved due to the X sessions?’

‘Do you feel that you are now better able to carry out task Y?’

 

The problem here is that such questions attempt to target the respondents’ feelings about something, not tangible results or experiences. In fact, such questions fail even to illuminate the respondents’ feelings, because they reveal only what respondents say their feelings are, which is often not an accurate reflection of their actual feelings. Respondents may also write what the survey-taker wants to hear, or they may not take the survey seriously at all and provide unreflective, ill-considered responses.

 

Much more telling would be questions rooted in experiences:

‘Which of the following types of English programs have you engaged in in the past?’

‘For how long?’

‘From what age to what age?’

 

These at least provide a more secure foundation for further interpretation and, ultimately, for making claims, suggestions, or recommendations.

 

Unfortunately, all too many ELT researchers use the numbers generated from these ‘feely’ surveys to make grand statements regarding suggested pedagogies or policies. ‘Our research shows that…’, as if by crunching the numbers the survey’s inherent ‘feeliness’ has somehow been mitigated and the whole inquiry is now legitimized.

 

We put a stuffed giraffe in the classroom…and it worked!

 

The South Park gnomes’ success formula has become an internet meme:

Step 1: Collect underpants

Step 2: ?????

Step 3: Profit!

 

Unfortunately, a fair amount of ELT research suffers from the same dubious logic. It tends to go something like this:

Background: Our students were doing X

Method: We added factor Y to their program

Result: Performances improved

Interpretation/Discussion: The introduction of Y raised student performances

 

The problem is that if the study is in any way short-term longitudinal (comparing performance before and after), it is natural and expected that student performance will improve between points A and B, simply by virtue of the fact that the students are studying in a classroom. Attributing the jump in performance to factor Y is often simply a case of confirmation bias. Hence, the stuffed giraffe:

Place a stuffed giraffe in the classroom at point A

Teach and practice English

Test at point B

Scores improve

Man, that stuffed giraffe must have worked its magic!
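The flaw is easy to see in a toy simulation (a purely hypothetical sketch; the numbers and the `simulate_class` function are my own invention, not drawn from any real study). If ordinary classroom study alone raises scores, a pre/post comparison will show a gain whether or not the giraffe is in the room:

```python
import random

random.seed(1)

def simulate_class(n_students=30, giraffe=False):
    """Simulate average pre/post test scores (out of 40) for one class.

    Every student gains roughly 5-15 points from ordinary classroom
    study; note that the giraffe flag contributes nothing at all.
    """
    pre = [random.randint(8, 14) for _ in range(n_students)]
    gains = [random.randint(5, 15) for _ in range(n_students)]
    post = [p + g for p, g in zip(pre, gains)]
    return sum(pre) / n_students, sum(post) / n_students

pre_mean, post_mean = simulate_class(giraffe=True)
print(f"With giraffe:    {pre_mean:.1f} -> {post_mean:.1f}")

pre_mean, post_mean = simulate_class(giraffe=False)
print(f"Without giraffe: {pre_mean:.1f} -> {post_mean:.1f}")
```

Both runs show the same kind of improvement, because the design never isolates the giraffe. Only a control group, or some comparable baseline, lets you attribute the gain to factor Y.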

 

Manipulating pre- and post-test results

 

An anecdote is in order here. Several years back, an official at my university thought it would be beneficial to show that English scores for medical students had improved between entry and the end of first year. This would, apparently, indicate the ‘success’ of our programs and, presumably, justify our existence. I was the one tasked with doing this. The powers-that-be wanted something ‘empirical and objective’, so I made a simple slot-and-filler test in which certain medically related items had to be correctly inserted, such as:

 

Questions about the start of the patient’s condition are referred to as ________ questions. (not an actual example)

 

There would be a list of terms on the test from which the students could choose answers, including ‘onset’, the correct response above. Generally, the new students scored about 10 out of 40. Then they took my classes, where these items came up regularly as part of practicing medical discourse (taking histories, case presentations, etc.), and the same test was administered again at the end. The results? An average of 36.

 

These results were hailed by the admin as indicators of the success of our program: ‘The scores went up over 300%!!’ 

 

Well, of course they did, because the students — prepare yourselves for this bit of Einsteinian logic — took the damn course!
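For the record, even the triumphant arithmetic deserves a second look. Going from an average of 10 to 36 is an increase of 260%; it is the post-test score that is 360% of the pre-test score. ‘Went up over 300%’ only works on the second, looser reading. A two-line sanity check (an illustrative sketch, not part of the original report):

```python
pre, post = 10, 36  # average test scores out of 40

increase_pct = (post - pre) / pre * 100  # relative increase
ratio_pct = post / pre * 100             # post as a share of pre

print(f"Increase: {increase_pct:.0f}%")  # prints "Increase: 260%"
print(f"Ratio:    {ratio_pct:.0f}%")     # prints "Ratio:    360%"
```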

 

Also, beware of the following research paper dogs:

 

a. Reporting on what you or your institution does in your English program without connecting it in any way that might be beneficial to other teachers or course designers. A status or field report is not a research paper.

 

b. Forcing APA style onto a manuscript that in no way should be rendered as a ‘scientific’ paper (which is fine; not every academic breakthrough or item of interest is fully ‘scientific’). This is often the fault of journal policy, which falsely assumes that dictating a strict APA science-paper style gives the content magical legitimacy, forcing otherwise interesting rhetoric into cubby holes it is completely unsuited to.

 

c. Writing a paper as if preparing for a PhD defense. Too many needless references, particularly for self-evident claims (‘Doofenshmirtz [2006] notes that English is widely taught as a second language throughout Asia’), and too many forced usages of the formulaic academic phrases that ‘sensei’ expects you to have mastered.

 

d. Glib endings. Among the hall-of-famers found in this category are:

More research is needed

We need to think about X more and more

Overreaching, dramatic finales (often reminiscent of bad TV adverts) such as: ‘30 minutes per day with Prokop Blocks can put children on the path to fluency and unclog your storm drains.’

 

Hopefully, this post will also unclog your storm drains. But more research is needed.

Mike Guest


Michael (Mike) Guest is Associate Professor of English in the Faculty of Medicine at the University of Miyazaki (Japan). A veteran of 25 years in Japan, he has published over 50 academic papers, 5 books (including two in Japanese), has been a regular columnist in the Japan News/Yomiuri newspaper for 13 years, and has performed presentations and led workshops and seminars in over 20 countries. Besides ranting and raving, his academic interests include medical English, discourse analysis, assessment, teacher training, and presentation skills.

4 Responses to Why I might reject your research paper – an alternative view

  1. Another great read and very informative for someone new to the scene such as myself. #SouthParkFormula #Einsteinian made me laugh, but so true

  2. Glad you enjoyed it! For anyone new to writing academic papers (which are necessary for securing university positions), I strongly urge you to look at the many excellent websites that go over the most common pitfalls. However, I would also urge caution: avoid outdated notions (insistence upon IMRaD, or PhD-style approaches) and sites that seem to conflate all academic writing with constructing scientific reports.

  3. This is a really interesting blog that I’m glad I’ve found. On a related note to what you write, you should compare the JACET Call for Papers in Japanese and English this year. English: “Include aim, hypothesis, method, results, and conclusion. The results are essential.” Japanese: “研究の目的、仮説、方法、結果、結論など(結果が書かれていないものは無効です)” (“the aim, hypothesis, method, results, conclusion, etc. of the research (submissions that do not report results are invalid)”). Both are clearly the IMRaD model. As someone who also publishes in the humanities, it feels a bit weird to see the strong insistence on imposing this approach on English language education, where, as you point out, *teaching* (or trying to teach) almost always yields better results than … *not teaching*

  4. Thanks for the comment.
    Yes, although JACET can be a helpful organization, their publications board is behind the curve and can be very bloody-minded. Their insistence upon the IMRaD formula forces many teachers/researchers to resort to flawed surveys or dubious corpus analysis in order to meet the criteria of being ‘evidence-based’, which I suppose allows the editors access to the elite STEM researchers’ jacuzzi.

    As a result, phenomena or observations that might be valid and interesting come off in many such papers as being overly contrived.
