[IGFmaglist] Proposal for Modification for the Workshop Review and Evaluation Process
schalmers at ntia.doc.gov
Tue May 31 14:21:52 EDT 2016
Dear Chair, MAG members,
Thank you for the opportunity to share comments on the proposal ex officio, which I offer as a former MAG member who was appointed by the previous chair to develop written criteria for workshop evaluations. I will not be able to attend the call, so I am sending my comments to the group at large.
Note that I have not seen this proposal before now, though I have shared with Rasha reflections on her general ideas in the past few weeks.
While this proposal would be an excellent place to start discussion for 2017, I would caution against adopting it this late in the organizational process. Explanations are interleaved with the proposal, below.
At this point, as regards the evaluations, I would suggest that the MAG spend its time and energy preparing for the third stage of the process, the in-person evaluation. The proposal text appears below, with my responses following each point:
Concerns with the current system:
1. All MAG members are asked to review all workshops, which is a considerable investment of time and resources.
-- True, reading roughly 200 workshop proposals and grading them thoughtfully is time consuming. MAG members should allocate a few evenings, and perhaps a weekend day, to complete this work. But evaluating workshop proposals is a core MAG responsibility, one that comes with the privilege of being appointed as a MAG member.
With that said, if the number of workshop proposals grows rapidly in the years to come, a ‘system’ like the one discussed in this proposal may become necessary (provided that the size of the MAG remains constant, etc). However, creating a new system also means developing the “back-end,” which demands time and resources from a very busy Secretariat. At this stage during the process, especially in light of the retreat and other activities, I would imagine that this would be a heavy lift for them. (More on this point below)
Note that, by reading all of the proposals, MAG members are able to evaluate any one proposal against any other, whereas this is not possible if proposals are graded in sets by different MAG members. This could lead to imbalance across the different workshop proposal sets – for example, a large number of highly scored proposals relating to the same topic, spread across a number of sets – which would have to be cured through a subsequent process and discussion in plenary.
2. Members rate every single workshop on 10 criteria, giving it a single score from 1 to 5. It is impossible to provide operational definitions for the scores given if one score is used to reflect 10 different criteria.
-- In speaking with different MAG members who have been through this process before, I have come to realize that MAG members, in the end, develop their own criteria and personal approach when evaluating proposals. We have looked at giving each consideration a score in the past but decided against that approach for this reason – a general score based upon guideline considerations is a more realistic approach.
On this point, as the proposal condenses (rather than revises) the 10 criteria into 4, with multiple questions and considerations included within two of the four “considerations,” it is unclear how this approach would actually deliver the “nuance” it originally sets out to achieve.
It seems that the trade-off here is the following: under the new proposal, any given workshop proposal will receive a more “thorough” consideration by each MAG member, but only 8 of the 55 MAG members will evaluate it, as opposed to having all 55 MAG members (though in practice not all of them evaluate workshops) read every proposal and simply provide a single 1-to-5 score. Under the present system the MAG is fully informed of the entire corpus of workshop proposals and can evaluate them in plenary, together. Under the proposal, the MAG would need to reconcile the differences among each graded “set” to achieve overall balance in programming. This is possible, but the change suggested for stage 2 of the process needs to be accompanied by a proposed balancing process for stage 3, in order for the MAG to evaluate the workshops successfully in the time available at the meeting.
If the issue is giving more “nuance” to each evaluation, recall that there is already an opportunity to provide feedback and comments on a proposal, and MAG members are encouraged to do so, especially in the event of a low score.
3. The scores given are therefore subjective in nature, and one proposal would get quite different scores depending on who gets to rate it, which is why MAG members feel they all need to evaluate every workshop.
-- Naturally, the scores are subjective, and naturally, one proposal can receive quite different scores from one MAG member to another. This is because different things are important to different members and their constituencies. This will not change if MAG members are given fewer workshops to evaluate and are required to grade different aspects of each proposal.
* Focus the criteria used for evaluation to the following:
** Is the proposal relevant to Internet Governance and to the IGF2016 main theme?
** Is the proposal well thought out, and does it cover enough aspects of the issue of concern? Is the main Internet governance issue clearly spelled out?
--Note that these are two separate considerations being graded by one score.
** Is the list of speakers diverse enough (in terms of gender, geography, stakeholder group, policy perspective, and/or persons with disabilities)? Are the speakers qualified to tackle the topic? Are there speakers from developing countries?
--Note that this criterion contains three separate points for consideration graded by one score. Note especially that, under the existing process, the Secretariat flags proposals from developing countries for MAG member evaluation, and these proposals are generally encouraged. This criterion could therefore have the effect of undermining the criteria established by previous MAGs to encourage such proposals.
** Is the Workshop description consistent with the format listed (for example, if the format is Debate, then does the proposal describe how the debate will be set up, with timings, etc., indicated; are all sides of the issues represented)? (Innovative formats are encouraged).
* The secretariat will provide information on whether or not this is a debut (first time) proposal. There could be a separate pool for debut presentations, or a certain number of points could be added to a debut presentation. Such point value would be determined for the first year once all the scores come in. This could be done at the NYC MAG meeting.
Should the first time proposal automatically receive a point? A half of a point? What is appropriate?
Note that under the present criteria MAG members are encouraged to support first time and developing country proposals in the grading, and that they take these into account relative to other workshop proposals.
* Each reviewer will give a score from 1 to 5, with 5 being the best possible, on each criterion. The system will calculate an average score for the proposal based on the four scores given. MAG reviewers should provide brief feedback to the proposers if the score given is below 3.
* Each proposal is to be randomly routed to 8 MAG members, 2 of each stakeholder group. If an evaluator cannot do the evaluation for any reason (possible conflict of interest, lack of experience in topic, etc.), the evaluator can indicate that on the system, and the system can randomly route that proposal to another member within the same stakeholder group.
How will the workshop proposals be randomly routed? What would this require of the Secretariat? Is the existing system capable of doing this, or will it be done manually? What if a MAG member receives an unbalanced “set” of proposals – all coming from developed countries, for example, or all relating to encryption? How would evaluators come to understand the topics being graded in the other sets, so as to give their own workshops a relative value in light of all workshops submitted?
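For discussion purposes, the routing mechanism described in the proposal could be sketched as follows. This is purely illustrative: the function names, the four-group structure, and the re-routing mechanics are assumptions drawn from the proposal text, not a description of any existing Secretariat system.

```python
import random

# Illustrative sketch of the proposed routing: each workshop proposal
# goes to 8 reviewers, 2 drawn at random from each stakeholder group,
# with re-routing within the same group when a reviewer declines.
# All names and structures here are hypothetical.

def route_proposal(members_by_group, per_group=2):
    """Pick per_group reviewers at random from each stakeholder group."""
    reviewers = []
    for members in members_by_group.values():
        reviewers.extend(random.sample(members, per_group))
    return reviewers

def reroute(declining, reviewers, members_by_group):
    """Replace a declining reviewer with another randomly chosen member
    of the same stakeholder group who is not already assigned."""
    group = next(ms for ms in members_by_group.values() if declining in ms)
    candidates = [m for m in group if m != declining and m not in reviewers]
    replacement = random.choice(candidates)
    return [replacement if r == declining else r for r in reviewers]
```

Even this toy version surfaces the open questions above: randomness alone does not balance the topical content of each member's "set," and re-routing assumes the declining member's group still has unassigned members available.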
* Given that the MAG is expecting around 250 proposals, and that each proposal will need 8 evaluations, this makes a total of about 2,000 evaluations, which averages out to about 36 proposals per MAG member. Given that the stakeholder groups are not equal in size, and that some evaluations will be re-routed, some members will end up with slightly more proposals than others. However, if this works out correctly, no member should end up with more than 40 or so proposals.
The implementation of this needs to be taken into account. How will evaluations be re-routed? Based upon which criteria?
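The workload estimate follows directly from the figures in the proposal (250 proposals, 8 evaluations each, 55 MAG members); a quick check of the arithmetic:

```python
# Checking the workload arithmetic stated in the proposal:
# ~250 proposals, 8 evaluations each, spread over 55 MAG members.
proposals = 250
reviews_per_proposal = 8
mag_members = 55

total_evaluations = proposals * reviews_per_proposal
avg_per_member = total_evaluations / mag_members

print(total_evaluations)         # 2000
print(round(avg_per_member, 1))  # 36.4
```

The average of roughly 36 matches the proposal's figure; the spread up to "40 or so" depends on how re-routing and unequal group sizes are handled, which the proposal leaves open.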
* With the approval of MAG members, this system could be tried out this year, and further enhancements could be implemented for next year.
The proposed system has the following advantages:
* The criteria are not different from those already published on the MAG website. They are just clustered and grouped differently. This means that workshop proposers will not be harmed in any way by the proposed changes.
This is not entirely accurate, as explained above re developing country/first time proposal preferences.
If the point is to provide “operational definitions” for each consideration, how is this solved by grouping multiple considerations together?
* Giving an individual score to four criteria rather than one score for ten criteria enables MAG members to evaluate proposals more accurately, and makes it clear what factors distinguish each proposal from others.
As explained above, I do not think that this tradeoff will yield a better evaluation process and it does not address the problem that this point seeks to solve. In fact, it is important to note that “distinguishing proposals” from each other based upon score has not been a problem in the past, so it is unclear why it is necessary to create a solution when there is no need for one.
* Giving an individual score to four criteria rather than one score for ten criteria makes for less subjective evaluations, since each factor being judged is ultimately more defined.
While the intent is understood, the proposal does not go far enough to address what is being sought - objectivity. In theory, a MAG member could still give a proposal a 1 on every criterion. If accountability is the issue, then publishing the grades is the real solution, though I do not believe that all MAG members would find that reasonable, for a number of valid reasons.
* If all feedback is given to workshop proposers (including the scores), they would be able to know the strengths and weaknesses of their proposal just by looking at the different scores and knowing which items scored less than others. This would also help them make better proposals the following year.
I do agree that this would be helpful, but this “nuance” issue is already addressed through the comment function.
* Each MAG member evaluates around 36-40 proposals rather than 250, saving much time and resources.
* Each proposal is still being judged by representatives of all stakeholder groups.
These thoughts are offered above for the sake of MAG discussion, and I hope they are useful in determining whether or not to proceed with the proposal. I would suggest that this work be taken up, without similar time constraints, for next year.
Telecommunications Policy Specialist
Office of International Affairs | NTIA | U.S. Department of Commerce
1401 Constitution Avenue, NW, Washington, D.C. 20230
Desk: 202.482.6789 || schalmers at ntia.doc.gov
From: Igfmaglist [mailto:igfmaglist-bounces at intgovforum.org] On Behalf Of Lynn St.Amour
Sent: Tuesday, May 31, 2016 10:42 AM
To: IGF Maglist
Subject: Re: [IGFmaglist] Proposal for Modification for the Workshop Review and Evaluation Process
Thank you to Rasha, and all who contributed to this proposal.
In order to advance the discussion for our MAG call, the Secretariat and I have tried to address a few of the questions here (where the responses are known):
- Yes, proposals are identified as being submitted by a 1st time proposer (Rasha & Renata)
- the MAG does work towards an ideal number of workshops and this number will be known (Ginger)
- criteria for MAG members participation: emerging consensus seems not to change significantly from last year so little to no impact on WS proposals (Renata)
Open for discussion:
1 - will it be possible to put a proposal in a provisional status or have a second evaluation? (Ginger, Renata)
2 - should workshops be evaluated in groups with similar themes (Ginger)
3 - do we need to assign weights to criteria to signal importance (Zeina)
4 - can MAG members review more than the assigned number of workshops? (Flávio)
5 - what feedback will be shared with proposers: text as last year, average ratings by criteria? (Rasha, Renata)
There are several new suggestions in this proposal and it may be easier to separate them out for discussion/approval:
1 - Increased focus/aggregation of criteria (note: all previous criteria are included)
2 - Scoring on each of the 4 (aggregated) criteria vs. one score encompassing all the criteria
3 - Each WS proposal will be reviewed by a subset of MAG members, proposal suggests 8 MAG members with 2 from each SH group, though this could be increased
If you are not able to make the June 1st call, please send your comments to the list in order to help us assess consensus. It is important that we hear from you.
> On May 30, 2016, at 10:51 AM, Ginger Paque <VirginiaP at diplomacy.edu> wrote:
> Thanks to those who worked on this, especially to Rasha for pushing it. I have to admit I was not sure we could find a way to implement a strategy of not having each MAG member evaluate each proposal, but I like this in general. I still have a few questions:
> 1. Will there be a possibility to put a proposal in a provisional status, allowing for correcting some details, such as balance on the panel (gender/stakeholder/other, if appropriate and suggested); a clear proposal for including remote/online participants, and other necessary details?
> 2. Will we work towards an ideal number of workshops? Should we be thinking about comparing workshops, if we know that only half of them can practically be accepted?
> 3. Should workshops be evaluated in groups with similar themes, so that the 'best' proposals in certain thematic areas can be approved?
> Best regards,
> Virginia Paque
> Upcoming online courses: Humanitarian Diplomacy, 21st Century
> Diplomacy, Diplomatic Law: Privileges and Immunities, Internet
> Technology and Policy: Challenges and Solutions, Multilateral
> Diplomacy. http://www.diplomacy.edu/courses
> On Mon, May 30, 2016 at 7:46 AM, Renata Aquino Ribeiro <raquino at gmail.com> wrote:
> Dear Rasha and all
> Thanks for submitting this proposal on workshop evaluation for MAG
> members' analysis.
> Along the main lines of the proposal, it provides a good way to modify
> the evaluation and streamline the process.
> I am in agreement.
> Thanks also to the team who participated on this.
> I do have a few doubts, one of them for the Secretariat
> * ****Secretariat please see*****
> From the text:
> "The secretariat will provide information on whether or not this is a
> debut (first time) proposal. There could be a separate pool for debut
> presentations, or a certain number of points could be added to a debut
> presentation. Such point value would be determined for the first year
> once all the scores come in. This could be done at the NYC MAG meeting."
> Does the Secretariat have this info?
> * Question to MAG Members (and all who would like to pitch in) From
> the text:
> "If all feedback is given to workshop proposers (including the
> scores), they would be able to know the strengths and weaknesses of
> their proposal just by looking at the different scores and knowing
> which items scored less than others. This would also help them make
> better proposals the following year."
> Is it possible to have a second-chance evaluation for some workshops?
> Could this same feedback be directed to that?
> My question comes from the fact that sometimes workshops just need minor
> modifications to be adequate for presentation. Should this be taken
> into account here?
> Also another issue
> As the MAG members are still discussing the appropriate criteria for MAG
> member participation, it should be clear whether the rule applied
> would result in modification of workshop proposals, and it would be
> important that this be part of the publicly shared evaluation process.
> On Sun, May 29, 2016 at 7:23 AM, Dr. Rasha Abdulla <rasha at aucegypt.edu> wrote:
> > Dear MAG members,
> > Following the Secretariat's green light, I have finalized the
> > proposal for modifying the Workshop Review and Evaluation Process.
> > This proposal tackles only the second stage of the review process,
> > that of evaluation by MAG members. The first stage (the Secretariat
> > screening), as well as the third stage (final decisions re borderline cases, mergers, etc) remain unchanged.
> > I hope this proposal arrives at a middle ground for this year that
> > takes care of most of the concerns raised. It also reduces the
> > subjectivity in evaluation, and it considerably reduces the work
> > load per MAG member. Many thanks to Flavio, who suggested the work
> > distribution among MAG members, and to Susan for her comments on the
> > whole process. I'm attaching the new proposal on the second stage of
> > reviewing as well as the current document for the whole review process.
> > In the interest of time before our next virtual meeting, and since
> > there was little interaction on the WG mailing list, I'm hereby
> > offering the proposal to the full list of MAG members for
> > consideration. I request that the Secretariat include this on Wednesday's meeting agenda if possible.
> > Best regards.
> > Rasha
> > Rasha A. Abdulla, Ph.D.
> > Associate Professor and Past Chair
> > Journalism and Mass Communication
> > The American University in Cairo
> > www.rashaabdulla.com
> > Twitter: @RashaAbdulla
> > _______________________________________________
> > Igfmaglist mailing list
> > Igfmaglist at intgovforum.org
> > http://intgovforum.org/mailman/listinfo/igfmaglist_intgovforum.org