[IGFmaglist] Proposal for Modification for the Workshop Review and Evaluation Process
Juan Fernandez Gonzalez
juan.fernandez at mincom.gob.cu
Tue May 31 19:06:11 EDT 2016
First of all let me congratulate Rasha and her collaborators for conducting a critical analysis of the established workshop review and evaluation process.
But I am not totally in agreement with some of the conclusions.
Let me begin with the last of the three concerns and then move up:
The scores are subjective in nature: That is totally true, so the evaluations are completely relative, and yes, that is why MAG members feel they need to evaluate every workshop.
I think that it is impossible to avoid this and to have an “absolute” rating system.
So from the above, whether there are 10 or 5 criteria is totally irrelevant, because the evaluations are always subjective and relative. That is, what a MAG member can decide is whether one workshop is “better” than another, and so rank them in order, so that if a cut-off is set, say, after the first 100 workshops, they know which workshops to recommend and which not. I placed quotes on “better” because I will come back to this concept later.
So having said this about the last two concerns, we get to the topic of how many workshop proposals the MAG members should evaluate.
My answer to this? ALL.
The only solution to a subjective and relative evaluation process based on arbitrary criteria is a statistical solution.
This means that the only way to approach an “objective” evaluation is to have as many and as diverse evaluators as possible.
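The statistical idea above can be sketched in a few lines: each evaluator's subjective score is noisy, but averaging over many diverse evaluators smooths out the individual biases and yields a usable relative ranking. The evaluator names, workshop names, and scores below are invented examples, not real MAG data.

```python
# Minimal sketch: average scores from many evaluators, then rank.
# All names and numbers are invented placeholders for illustration.
from statistics import mean

def rank_proposals(scores_by_evaluator):
    """scores_by_evaluator: {evaluator: {proposal: score}}.
    Returns proposals ordered best-first by mean score."""
    collected = {}
    for evaluator_scores in scores_by_evaluator.values():
        for proposal, score in evaluator_scores.items():
            collected.setdefault(proposal, []).append(score)
    averaged = {p: mean(s) for p, s in collected.items()}
    return sorted(averaged, key=averaged.get, reverse=True)

scores = {
    "evaluator_a": {"ws1": 4, "ws2": 2, "ws3": 5},
    "evaluator_b": {"ws1": 3, "ws2": 5, "ws3": 4},
    "evaluator_c": {"ws1": 5, "ws2": 1, "ws3": 4},
}
print(rank_proposals(scores))  # → ['ws3', 'ws1', 'ws2']
```

With a ranking like this, a cut-off (say, the top 100) can be applied to the ordered list even though no individual score is “absolute”.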
And I agree with what was said: the evaluation of workshops is one of the fundamental tasks of MAG members, so we have to do it.
Of course, after nearly 40 years of grading exams, tests, and papers of university students, I can tell you that there exist methods to evaluate large numbers of exams by a fixed deadline. The most common one you all know: the low-pass, high-pass, and doubt method.
This is to classify the proposals into three groups after a quick initial review: the ones that you are “sure” will pass, the ones that you are “sure” will not pass, and the “doubt” group. I quoted the word “sure” because nothing is carved in stone (remember subjectivity), and maybe in a second pass some proposals can move into or out of any of these three groups.
The process is iterative, and these three groups are refined after each round until you get a stable relative evaluation of the workshops.
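The triage method described above can be sketched as follows. The thresholds, the per-round judging functions, and the workshop names are all invented for illustration; in practice each “judge” would be a more careful re-reading of the doubt group rather than a fixed function.

```python
# Sketch of the low-pass / high-pass / doubt triage, iterated over the
# doubt group. Thresholds (high=4, low=2) and all data are invented.
def triage(proposals, judge, high=4, low=2):
    """Split proposals into pass / fail / doubt from a quick judge(p) score."""
    groups = {"pass": [], "fail": [], "doubt": []}
    for p in proposals:
        score = judge(p)
        if score >= high:
            groups["pass"].append(p)
        elif score <= low:
            groups["fail"].append(p)
        else:
            groups["doubt"].append(p)
    return groups

def iterative_triage(proposals, judge_rounds, high=4, low=2):
    """Apply successive (presumably more careful) judges to the doubt group."""
    decided_pass, decided_fail, doubt = [], [], list(proposals)
    for judge in judge_rounds:
        groups = triage(doubt, judge, high, low)
        decided_pass += groups["pass"]
        decided_fail += groups["fail"]
        doubt = groups["doubt"]
        if not doubt:
            break
    return decided_pass, decided_fail, doubt

# First pass is quick; the second pass only revisits the doubt group.
round1 = {"ws_a": 5, "ws_b": 1, "ws_c": 3, "ws_d": 3}.get
round2 = {"ws_c": 4, "ws_d": 2}.get
accepted, rejected, undecided = iterative_triage(
    ["ws_a", "ws_b", "ws_c", "ws_d"], [round1, round2])
print(accepted, rejected, undecided)  # → ['ws_a', 'ws_c'] ['ws_b', 'ws_d'] []
```

The point of the design is that each round spends effort only on the shrinking doubt group, which is what makes the method tractable for a large pile of proposals under a fixed deadline.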
So to summarize my recommendations:
I think that ALL MAG members should evaluate ALL workshop proposals, based on whatever criteria each one believes is relevant to decide if a workshop proposal is “better” than another.
What we can discuss is what we consider as “better”, remembering that it is almost impossible to have absolute measurements, and that many aspects have to be considered, not only the ones in the proposed system.
There are some criteria that are not part of the proposals. For example, if there is only one proposal on a topic that the MAG decides is very important this year, maybe that proposal should be admitted even if it is not as well presented as some other workshop on a topic for which there is an abundance of proposals. Something similar can be said about a proposal that brings very relevant persons as speakers, etc.
Well, I will leave it here.
From: Zeina BOUHARB [mailto:zbouharb at yahoo.com]
Sent: Tuesday, 31 May 2016 3:02
To: Renata Aquino Ribeiro; Dr. Rasha Abdulla; Chengetai Masango; Eleonora Anna MAZZUCCHI
CC: IGF Maglist
Subject: Re: [IGFmaglist] Proposal for Modification for the Workshop Review and Evaluation Process
Thank you for the work done. I guess it would be great to see the result in an evaluation grading sheet.
My concern is that we are categorizing 4 sets of criteria without determining an ordering from highest to least importance. Is Relevance equal to Format, for example, in grading?
I propose to assign weights to each set after scoring.
The scoring used might be 0-2, where 0 means the proposal does not meet the criteria, 1 means it partially meets them, and 2 means the proposed workshop fully meets the evaluation criteria.
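The weighting idea above could look like the sketch below. Only Relevance and Format are named in this thread; the other two criteria sets (“diversity”, “content”) and all the weight values are invented placeholders, since the MAG has not agreed on them.

```python
# Sketch of weighted 0-2 scoring per criteria set.
# Criteria names beyond Relevance/Format and ALL weights are invented.
WEIGHTS = {"relevance": 0.4, "format": 0.2, "diversity": 0.2, "content": 0.2}

def weighted_score(ratings):
    """ratings: {criteria_set: 0|1|2}. Returns a weighted total on a 0-2 scale
    (weights sum to 1, so 2 means every criteria set is fully met)."""
    for name, rating in ratings.items():
        if rating not in (0, 1, 2):
            raise ValueError(f"rating for {name} must be 0, 1, or 2")
    return sum(WEIGHTS[name] * rating for name, rating in ratings.items())

example = {"relevance": 2, "format": 1, "diversity": 2, "content": 0}
print(weighted_score(example))  # 0.8 + 0.2 + 0.4 + 0.0 = 1.4
```

Applying weights after scoring, as proposed, keeps the 0-2 scale simple for evaluators while still letting the MAG decide later that, say, Relevance counts for more than Format.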
OGERO Telecom/ MoT
Mobile: + 961 3876436
From: Renata Aquino Ribeiro <raquino at gmail.com>
To: Dr. Rasha Abdulla <rasha at aucegypt.edu>; Chengetai Masango <cmasango at unog.ch>; Eleonora Anna MAZZUCCHI <EMAZZUCCHI at unog.ch>
Cc: IGF Maglist <igfmaglist at intgovforum.org>
Sent: Monday, 30 May 2016, 15:46
Subject: Re: [IGFmaglist] Proposal for Modification for the Workshop Review and Evaluation Process
Dear Rasha and all
Thanks for submitting this proposal on workshop evaluation to MAG
In its main lines, the proposal provides a good way to modify
the evaluation and streamline the process.
I am in agreement.
Thanks also to the team who participated in this.
I do have a few doubts, one of them for the Secretariat.
* ****Secretariat please see*****
From the text:
"The secretariat will provide information on whether or not this is a
debut (first time) proposal. There could be a separate pool for debut
presentations, or a certain number of points could be added to a debut
presentation. Such point value would be determined for the first year
once all the scores come in. This could be done at the NYC MAG
Does the Secretariat have this info?
* Question to MAG Members (and all who would like to pitch in)
From the text:
"If all feedback is given to workshop proposers (including the
scores), they would be able to know the strengths and weaknesses of
their proposal just by looking at the different scores and knowing
which items scored less than others. This would also help them make
better proposals the following year."
Is it possible to have a second-chance evaluation for some workshops?
Could this same feedback be directed to that?
My ask comes from the fact that sometimes proposals just need minor
modifications to be ready for presentation. Should this be taken
into account here?
Also another issue
As the MAG members are still discussing the adequate criteria for MAG
members' participation, it should be clear whether the rule applied
would result in modifications to workshop proposals, and it would be
important for this to be part of the publicly shared evaluation process.
On Sun, May 29, 2016 at 7:23 AM, Dr. Rasha Abdulla <rasha at aucegypt.edu<mailto:rasha at aucegypt.edu>> wrote:
> Dear MAG members,
> Following the Secretariat's green light, I have finalized the proposal for
> modifying the Workshop Review and Evaluation Process. This proposal tackles
> only the second stage of the review process, that of evaluation by MAG
> members. The first stage (the Secretariat screening), as well as the third
> stage (final decisions re borderline cases, mergers, etc) remain unchanged.
> I hope this proposal arrives at a middle ground for this year that takes
> care of most of the concerns raised. It also reduces the subjectivity in
> evaluation, and it considerably reduces the work load per MAG member. Many
> thanks to Flavio, who suggested the work distribution among MAG members, and
> to Susan for her comments on the whole process. I'm attaching the new
> proposal on the second stage of reviewing as well as the current document
> for the whole review process.
> In the interest of time before our next virtual meeting, and since there was
> little interaction on the WG mailing list, I'm hereby offering the proposal
> to the full list of MAG members for consideration. I request that the
> Secretariat include this on Wednesday's meeting agenda if possible.
> Best regards.
> Rasha A. Abdulla, Ph.D.
> Associate Professor and Past Chair
> Journalism and Mass Communication
> The American University in Cairo
> Twitter: @RashaAbdulla
> Igfmaglist mailing list
> Igfmaglist at intgovforum.org<mailto:Igfmaglist at intgovforum.org>