[Wgwseval] Call for input

SUTO Timea Timea.SUTO at iccwbo.org
Fri May 18 14:57:58 EDT 2018


Dear Rasha, colleagues,



Following your questions on the next steps for workshop evaluation, please see below a few thoughts and suggestions:



1. On grading and baskets

I think there is a lot of merit in MAG members being assigned to evaluate workshops in a certain category, so that they are fully aware of similar workshops, can compare them on a curve, and have a better sense of possible mergers.



At the same time, I don’t think the evaluators need subject-matter expertise in the category they evaluate. Workshops at the IGF tend to target large audiences (not just experts) and to bridge different issues and topics. It’s more important that evaluators judge the relevance of the issue to be discussed at the IGF, the diversity of opinions represented on the organizing team and panels, the creativity and interactivity of the format and agenda, and the suitability of the format to the proposed agenda. I think members of the MAG have the experience and exposure to the IGF necessary to make an informed decision based on these principles.



In fact, if we judged topics we know well and are active in, we would be more likely to face proposals from our colleagues and/or competitors, which only increases the likelihood of bias.



That being said, if someone feels very strongly about not judging a certain topic in general, this can be signaled to the Secretariat. At the same time, if any one proposal seems too technical or granular for the evaluator, they can always opt out of judging that particular one but still judge the rest of the category.



One thing that I’d like to draw strong attention to (irrespective of whether we judge categories based on expertise or not) is ensuring that the reviewer teams are balanced and reflect the diversity of stakeholders and regions.
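
To make the balance point concrete, below is a minimal sketch in Python (with purely illustrative field names; this is not an agreed procedure) of how reviewer teams could be formed so that no team is dominated by a single stakeholder group or region:

    # A minimal sketch, not an agreed procedure: spread stakeholder
    # groups and regions across reviewer teams by dealing members out
    # round-robin after sorting. Field names are illustrative.
    from itertools import cycle

    def form_teams(members, num_teams):
        """members: list of dicts with 'name', 'stakeholder', 'region' keys."""
        # Sorting by stakeholder group, then region, means the round-robin
        # deal below scatters similar members across different teams.
        ordered = sorted(members, key=lambda m: (m["stakeholder"], m["region"]))
        teams = [[] for _ in range(num_teams)]
        for member, team in zip(ordered, cycle(teams)):
            team.append(member)
        return teams

Sorting before dealing is what spreads members of the same stakeholder group across teams; an actual assignment would of course also have to handle the conflict-of-interest opt-outs mentioned above.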



2. On evaluation criteria

Just after our face-to-face MAG meeting, business members on the MAG shared a proposal with the ad-hoc working group, including a number of recommendations for the evaluation process. I am sharing it here as well, in the hope that we can embrace these suggestions.



3. On ranking and mergers

Before we think about this, I think we need to ask ourselves a few questions:

- How many (or what percentage of) sessions do we accept per basket? Are we accepting an equal number of sessions for each of the 8 baskets, or do we want to weight the distribution of workshop slots per category according to popularity (either in the call for issues or in actual workshop proposals)? (See the allocation sketch after this list.)

- Are we prioritizing ranking within category, or overall score? For example, if the highest-ranking workshop in category A has a score of 3.4 while the worst-scoring workshop in category B has 3.7, which one do we accept?

- Are we thinking of mergers as a solution to boost the quality of low-scoring workshops, or as a tool to weed out similar workshops? If there are 3 workshops on a very similar topic, all scoring very highly, do we accept them all?
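
On the first question, a worked example might help anchor the discussion. The sketch below (illustrative numbers only, not a proposal) splits a fixed number of slots across baskets in proportion to the number of proposals received, using largest-remainder rounding so the totals add up:

    # Hypothetical sketch: split `total_slots` across baskets in
    # proportion to proposal counts (largest-remainder rounding).
    def allocate_slots(proposals_per_basket, total_slots):
        total = sum(proposals_per_basket.values())
        quotas = {b: total_slots * n / total
                  for b, n in proposals_per_basket.items()}
        slots = {b: int(q) for b, q in quotas.items()}
        # Hand the leftover slots to the largest fractional remainders.
        leftover = total_slots - sum(slots.values())
        for b in sorted(quotas, key=lambda b: quotas[b] - slots[b],
                        reverse=True)[:leftover]:
            slots[b] += 1
        return slots

    # Illustrative numbers only: 60 slots over 8 baskets.
    print(allocate_slots({"A": 50, "B": 40, "C": 35, "D": 30,
                          "E": 25, "F": 20, "G": 15, "H": 10}, 60))

With these made-up counts the most popular basket gets 13 slots and the least popular 3, which shows how far a popularity weighting can pull away from an equal 7-8 slots per basket.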



I think, if we go ahead with grading all workshops in a category, MAG members should be encouraged to compare workshops to each other and grade on a bell curve, which would hopefully help select between similar workshops. Perhaps a middle ground can be reached by proposing that some of the top-scoring proposals from each basket go forward. This might already ensure that diversity criteria are (at least for the most part) respected, since non-diverse workshops are unlikely to score highly. Then we should further discuss how to fill the remaining slots, taking into account ranking within the category, overall score, and potential to merge. (A minimal sketch of such a hybrid approach follows below.)
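
To illustrate that middle ground, here is a minimal sketch (hypothetical data shape, not a worked-out proposal) that first advances the top k proposals per basket and then fills the remaining slots from the overall score ranking:

    # A minimal sketch of the hybrid suggested above: advance the top
    # `per_basket_top` proposals in each basket, then fill whatever
    # slots remain purely by overall score, regardless of basket.
    def select_workshops(scored, per_basket_top, total_slots):
        """scored: list of (workshop_id, basket, score) tuples."""
        accepted = set()
        for basket in {b for _, b, _ in scored}:
            in_basket = sorted((w for w in scored if w[1] == basket),
                               key=lambda w: w[2], reverse=True)
            accepted.update(w[0] for w in in_basket[:per_basket_top])
        # Second pass: fill the remaining slots by overall score.
        for wid, _, _ in sorted(scored, key=lambda w: w[2], reverse=True):
            if len(accepted) >= total_slots:
                break
            accepted.add(wid)
        return accepted

Diversity checks and merger decisions would then apply on top of this, for example among similar high-scoring proposals flagged during the per-basket comparison.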



With many thanks and kind regards,

Timea


From: Dr. Rasha Abdulla <rasha at aucegypt.edu>
Date: Fri, May 11, 2018, 7:42 PM
Subject: Call for input
To: <wgwseval at intgovforum.org>

Dear WG members,
We need to move fast on some of these issues, since we need to update the MAG during our next meeting on Wednesday. I had listed some of these issues before, but I will try to be more concise here, taking your input into consideration.
1) It seems one main issue before us is how to use the baskets and issues we now have to guide the program. I was under the impression that the ad-hoc group that worked on this bit would carry it further, but based on Lynn's input I think that we (this WG) should look at it (Lynn is on this group, so she can correct me if I'm wrong). I believe quite a few of the ad-hoc group members have joined this WG as well.
Flavio has suggested that MAG members be grouped according to their specialty or interest, so that each group of MAG members reviews certain basket(s). Do you think this is a good idea? I'm a little torn and will be guided by your views. Initially we had hoped MAG members would be individually and randomly assigned to workshops to avoid any biases. That did not work last year (for technical reasons), so instead MAG members were divided into groups, and each whole group was randomly assigned workshops. Which do you think is the better direction? Judging based on expertise brings obvious advantages, but it might also introduce some biases. Judging randomly reduces the chance of bias, but we lose the potential advantage of comparing same-basket workshops. There is also a danger of relatively downgrading workshops that might have scored higher if the grading were absolute rather than comparative. What are your thoughts on that?

2) I'm still concerned about the third phase of evaluations, which seems to lack any criteria beyond diversity. How can we make the selection at this stage less subjective? I don't think MAG basket groups (if we go that route) would be helpful at this stage, because each basket will already have some workshops advanced. Instead, the MAG usually looks for groups of people who have been underrepresented. (Unless there is an issue that a MAG group wants to bring forward, which is of course still welcome.) In any case, are there other criteria to judge by at this stage?
3) How can we deal with mergers this year given that we want to limit the number of speakers on each (panel) session? Should things be done any differently?

4) How can we publicize/encourage the use of the community collaboration space?

Looking forward to your input.
Best regards.
Rasha

Rasha A. Abdulla, Ph.D.
Professor
Journalism and Mass Communication
The American University in Cairo
www.rashaabdulla.com
Twitter: @RashaAbdulla



