Students: consumers of education?

“The customer is always right”.  “Customer satisfaction”.  Those are phrases that get tossed around in pop culture all the time, but do they have any place on a university campus?  I read an interesting article about rewarding good teaching at Texas A&M by having students judge the best teachers and then giving out substantial sums of money – http://www.insidehighered.com/news/2009/01/13/bonuspay.  It seems like a good idea, and it could go a long way toward improving teaching in general, but does the premise behind it have any merit?

It reminds me of something I read a month ago about students as consumers of education.  For many years I subscribed to that approach – the product is the degree, the students are paying for it, so it made sense to think of the student as the consumer and to carefully consider their satisfaction.  The good news is that I’ve moved on to something I am more philosophically comfortable with; the bad news is that it sounds bad from a student’s perspective.  My new belief is that the learner is NOT the consumer – the learner is the product.

Following that thought, educators should not be focused on making or keeping students happy (although that is important).  Rather, we need to ensure that the goals and objectives of our program are instilled and built within each graduating student.  As an analogy, take a raw piece of aluminum and construct an automobile from it.  Imagining that the aluminum has feelings, it is doubtful that it enjoys being stretched and reshaped from a raw block into some fantastically useful shape, but such reshaping is necessary.

So does this mean I don’t care about students?  Absolutely I do, no question.  Caring about a student’s long-term growth and development is exactly why I hold such a philosophy.  Do I think that a class or a course should be a painful transition or reshaping?  Yes and no.  I have no doubt that a happy learner is a motivated learner.  Learning should be fun, but it shouldn’t be easy.  If your personal trainer only ensured that you were happy and comfortable, it’s doubtful you’d ever improve your fitness.  Perhaps education is no different.

Comments from students are definitely appreciated!

A long time away

OK, so I haven’t posted anything on here for almost a year.  When I stopped posting last February, I was running into the conflict of writing about my teaching and learning in a personal way without pissing off students.  I have yet to come up with a complete answer to that, but I need to share some interesting stories and do some venting!

First up – I read through last year’s posts and noticed the Dec 7/07 one congratulating the UFE grads.  I have similar sentiments for 2008 (well done!), but I took my concern about the UFE being treated as the be-all and end-all measurement to the next level by writing a short article for the CAAA newsletter questioning the use of UFE pass statistics:

Some Thoughts From the Education Chair – Fall 2008

Have you ever thought about how good (or not) your institution’s accounting program is? Program evaluation is an important component of curricular improvement, yet few academics have any training in it. It’s not surprising, then, that we are tempted to use inappropriate evaluation tools to measure the success of our programs.

The professional accounting exams in Canada, such as the UFE, may be adequate or even excellent evaluation tools for determining whether individual students are qualified to obtain their professional accounting designation. Those same exams, however, are not good measures of your program’s success. During my short academic career I have experienced first-hand two situations that you may recognize. First, School X’s accounting program was being “beaten” year after year by a nearby “competitor”, School Y. The faculty at School X had numerous meetings to try to figure out why this could be and what was wrong with their program.

Second, School Z was very proud of its students’ high pass rate on the UFE, and would not hesitate to informally advertise it.

I support internal program review, curriculum enhancement, and responsible advertising, but both situations mentioned above fail to make me smile. There are two flaws in the “logic” inherent in School X’s and School Z’s reactions. First, instead of reflecting solely on output measures, perhaps we should consider “value-added” measures. Second, professional education such as accounting should not be constrained to, or even focused on, one measure of performance, particularly one with such a short horizon after graduation from our programs.

Attending one of our accounting programs hopefully enhances students’ abilities, performance, skills, and attributes. However, we can’t dismiss the importance of the students’ first 18 or 19 years of life before they began at our institution. Perhaps we should consider a measure such as Education Value Added (EVA) – that is, how much your program adds to a student’s development. In my opinion, programs with high EVA deserve more respect than ones with high UFE pass rates but low EVA.

The second flaw in the “logic” is the assumption that each program has the same objective. While every program across Canada is concerned about helping its students achieve success on the professional exams, some programs likely have a (thankfully) much broader objective. I enjoyed watching Usain Bolt’s two gold-medal performances at the 2008 Olympics. I also watched American Bryan Clay win the decathlon. Both gentlemen are terrific athletes; each has chosen specific events or objectives to pursue. Clearly Bolt could beat Clay in the 100m and 200m events; I suspect that Clay could beat Bolt in eight other events. Bolt surely does not wake up in the middle of the night worried that Clay could beat him at the shot put or high jump. Likewise, once we’ve chosen appropriate objectives for our program, let us not become distracted by inappropriate comparisons.

Professional exam results can be used responsibly to help evaluate our individual programs. As we modify our curriculum, the year-over-year professional exam results may be useful as part of an overall evaluation strategy that measures a broad set of personal attributes, professional skills, and technical abilities.  When the exam results come out at the end of November, I encourage you to first phone a few past students who have been successful and congratulate them – that is surely the key purpose of publicly releasing the results. Then sit down with your colleagues and carefully think about how you can most effectively use that data to improve your program.

 

Sandy Hilton