

Final observations on EPSO’s replies to the Ombudsman
Mr Jaime Royo Olid
EU-Delegation to Sri Lanka & the Maldives
26, Sir Marcus Fernando Mawatha
Colombo 7, Sri Lanka

Colombo, 10 June 2012
Complaint: 756/2011 (MHZ)RT
Subject: Final observations on EPSO’s replies to the Ombudsman
Ref.: Your letter dated 09/05/2012

Dear Ombudsman,
Please find in Annex 1 my detailed observations on EPSO’s reply to your request for clarifications, which I understand reached your services after the due date of 30 April 2012. I also sum up my personal conclusions herewith.
My underlying allegation against EPSO concerns the margin of error inherent in the pre-selection procedure, which is inevitably greater than what EPSO would need in order to ensure, statistically, that the actual 1% best-performing candidates are selected.
Despite not acknowledging it, EPSO has implicitly recognised enough error to confirm my claims (e.g. the cumulative effect of cancelled CBT items, whereby candidates in the 2010 General Competition faced between a 9.6% and a 100% probability of confronting at least one erroneous question, sufficient to disqualify them; and the proxy of 4 wrong questions out of 20 in EPSO’s on-line CBT facility). In addition, it has failed to provide evidence or studies proving that the procedure could ever be statistically rigorous, and has admitted not even attempting to establish this: EPSO has translated CBT questions without comprehensive psychometric testing, hence the resulting difficulty standards are clearly out of control. In any case, the number of neutralisations EPSO reports is artificially lower than it should be, since “maladministration” prevents candidates from reviewing the questions.
Throughout this process I have provided an indicative framework for analysing the inherent margin of error in the pre-selection process, considering the sum of incidence factors such as: erroneous questions used, differences in difficulty level across supposedly equivalent tests, candidates answering questions randomly, and discrepancies in the performance of the same person. EPSO fails to demonstrate an understanding of any of these parameters.
I trust that the Ombudsman will conclude that EPSO has failed to prove its case. This would not imply the full discredit of the tools used, but rather of the way they are used. With practically no financial burden the CBT could be reconfigured to accommodate inherent margins of error. But the underlying problem lies in that eligibility requirements are irrationally low and substituted by the CBTs’ questionable capacity to predict staff performance. Hence, beyond EPSO, Human Resources policy should substantially increase candidates’ eligibility requirements, not only to make EPSO’s task manageable but to match the EU’s in-house staff competencies.
Sincerely yours,

Jaime Royo Olid
Encl: Annex 1, Annex 2 and Annex 3.
Cc: M. Ashbrook, SID President, BECH B2/327, European Commission, Kirchberg, Luxembourg.

Annex 1: Detailed analysis of EPSO’s comments on
the Ombudsman’s request for clarifications (Complaint Nº 756/2011 (MHZ)RT)

EPSO’s last reply follows the same line as previously in providing:
1. Incomplete answers: EPSO does not answer several crucial clarifications requested by the Ombudsman, such as: …
2. Quantitative evidence proving CBT does not select best performers: The data it has provided is (deliberately?) incomplete yet sufficient to statistically confirm an order of magnitude of the margin of error inherent to their CBT system greater than EPSO would require to preselect candidates fairly;
3. On CBTs quality:
a. EPSO’s qualitative claims are unfounded since it uses “maladministration” to prevent candidates from challenging questions;
b. Proxies on the quality of questions indicate that between 5% and up to 20% of CBT questions are erroneous: concerning the only publicly available proxies which prove the deficient quality of pre-selection questions, EPSO admitted an error in 1 out of 20, but does not even attempt to explain why it considers the other 3 challenged questions correct. EPSO fails to defend what is meant to be a logical argument;
c. EPSO’s procedural merits are self-acclaimed and not supported by studies, statistical models nor by candidates’ accounts: numerous participants whose complaints to EPSO have been ignored all argue that CBT questions translated from English are of questionable quality;
d. The translation of CBT into all EU languages has been undertaken with no psychometric consideration, hence testing across languages cannot be considered equivalent. Translating CBT questions from English into 22 other languages would require tremendous psychometric testing before anyone could legitimately claim equivalent difficulty standards across languages (e.g. English texts translated into German are significantly longer, and hence a German speaker faces harder time pressure).
Specific comments on EPSO’s replies
Concerning EPSO’s reply to Question 1:
What is most important about the neutralisation of questions is not so much what happens once they have been neutralised, but rather whether the number of questions exposed to verification is proportional to the number of erroneous questions. Clearly it is not, since questions cannot be reviewed by candidates, a practice considered maladministration in the Ombudsman's decision on own-initiative inquiry OI/4/2007/(ID)MHZ.
Concerning EPSO’s reply to Question 2:
Again, EPSO does not bother to reason out why three of the four questions from its on-line CBT facility that I challenged are supposedly correct. Having accepted an error in one of the twenty questions is a clear proxy indicating that 5% of questions are erroneous. Since EPSO has failed to defend the soundness of the rest, we can legitimately assume that 4 of the 20 questions of that CBT are problematic. This indicates that 20% of the CBT could be erroneous if EPSO’s verification standards are the same.
Concerning EPSO’s reply to Question (a):
The reference to the period 2006–2007 is irrelevant for the current CBT since, at that time, all candidates had exactly the same questions and hence an erroneous question had the same incidence on all. Only asymmetric differences in translation could be argued to have induced fairness problems. It is nevertheless worth noting that 53 neutralised questions out of “over 12,000” is not 0.65%. EPSO’s numerical skills here are clearly questionable. If the database of questions was the same between 2006 and 2008, the total number of neutralised questions was 53 + 9 = 62. This represents 62 out of “over 12,000” questions, implying that up to 0.52% of questions were wrong. The figure seems to me surprisingly low, since I can credit myself responsible for the cancellation of at least 2.
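The percentages in this paragraph can be rechecked in a few lines of Python. Treating “over 12,000” as a lower bound of 12,000 makes each share an upper bound (this is my own sketch of the arithmetic, not EPSO’s):

```python
# Checking the neutralisation percentages for 2006-2008.
# "over 12,000" is taken as a lower bound, so these shares are upper bounds.
pool = 12000
share_53 = 53 / pool          # EPSO quotes 0.65% for this figure
share_62 = (53 + 9) / pool    # cumulative total if the database was unchanged

print(f"53 / {pool} = {share_53:.2%}")   # about 0.44%, not 0.65%
print(f"62 / {pool} = {share_62:.2%}")   # up to 0.52%
```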
Concerning the period from 2010 onwards, following the reform whereby candidates sit supposedly equivalent tests based on different questions, the data provided is terribly misleading (deliberately so?). The table provided by EPSO is:
1. Incomplete: takes account of half of the competition cycles to date;
2. Inaccurate: it does not consider the cumulative damage of items which were only neutralised after being used in previous competitions;
3. Distorted: it does not distinguish items which can be challenged, such as ‘verbal’, ‘numerical’ and ‘abstract reasoning’, from those which cannot due to their subjective nature, such as ‘situational judgement’ and ‘professional skills’.
On the basis of this incomplete, inaccurate and distorted data we can nonetheless calculate that the cumulative number of questions neutralised for the 2010 General Competition was at least between 6 and 43 out of 2,540 items, that is, between 0.24% and 1.69% of questions, as illustrated below.

Cycles accounted by EPSO | a) Questions neutralised | b) CBT items | c) Non-verifiable items* | d) Verbal, numerical and abstract reasoning items (b - c) | e) Potential cumulative neutralised items after use in the 2010 General Competition | f) Potential cumulative share of neutralised questions (e/d)
2010 AD General cycle | 6 | 2540 | 0 | 2540 | 6 | 0.24%
2010 AD Linguist cycle | 8 | 3526 | 0 | 3526 | 8 | 0.23%
2012 AST cycle | 8 | 8898 | 5932 | 2966 | 8 | 0.27%
2011 AD Generalist cycle | 21 | 6364 | 3182 | 3182 | 21 | 0.66%
Total 2010 Gen comp | | | | 2540 | 43 | 1.69%
* 40/80 or 40/120 items for situational judgement, 40/120 items for professional skills.
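The shares in column f can be reproduced directly from columns d and e of the table, together with the cumulative total against the 2010 General Competition pool (a short Python sketch; figures are those reported above):

```python
# Recomputing column f (neutralised items / verifiable items) per cycle,
# and the cumulative total against the 2010 General Competition pool.
rows = {
    "2010 AD General":    (6, 2540),
    "2010 AD Linguist":   (8, 3526),
    "2012 AST":           (8, 2966),
    "2011 AD Generalist": (21, 3182),
}
for cycle, (neutralised, verifiable) in rows.items():
    print(f"{cycle}: {neutralised}/{verifiable} = {neutralised / verifiable:.2%}")

total = sum(n for n, _ in rows.values())   # cumulative neutralised items
print(f"Total vs 2010 Gen comp pool: {total}/2540 = {total / 2540:.2%}")
```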

But since EPSO has not accounted for half the CBT testing cycles, the figure could well be double: up to 3.38% of questions. Cycles unaccounted for include:
• 2010 Specialists cycle
• 2011 AD Linguist cycle
• 2011 AST cycle
• 2011 Specialist cycle
• 2011 CAST cycle
• 2012 AD Generalist cycle
• 2012 CAST cycle
Assuming the range above (between 0.24% and 3.38%), the probability of any candidate of the 2010 General Competition getting at least one erroneous question is, in simplified terms, the above range multiplied by 40 questions, that is: 9.6% to 100%. This implies that the fate of at least 9.6% and up to 100% of candidates depended on whether they got the right set of questions, independently of their performance.
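The simplified linear estimate can be compared with the exact complement rule. A short Python sketch, assuming 40 independent draws from a pool with error rate p (a simplification, since real questions are drawn without replacement); the exact figure is slightly lower than the linear bound but of the same order:

```python
# Probability of facing at least one erroneous question among n questions
# drawn from a pool with error rate p.
def p_at_least_one_simple(p, n=40):
    # Linear upper bound used in the text, capped at 100%.
    return min(p * n, 1.0)

def p_at_least_one_exact(p, n=40):
    # Complement rule, assuming independent draws (a simplification).
    return 1 - (1 - p) ** n

for p in (0.0024, 0.0338):   # the 0.24% and 3.38% bounds from the text
    print(f"p = {p:.2%}: simplified {p_at_least_one_simple(p):.1%}, "
          f"exact {p_at_least_one_exact(p):.1%}")
```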
In my case, since it seems that I did not get any of the 6 questions which were neutralised as part of the verifications of the 2010 General Competition, the probability that I was exposed to questions neutralised in later competitions can be calculated as between 0 and 37 items out of 2,534 (0% to 1.46%) × 40, that is, up to roughly 58% in simplified terms. Since EPSO is not accounting for half the competitions, we are more likely to be closer to 100% than to 0%.

Concerning EPSO’s reply to Question (b):
EPSO describes what is contractually “intended” with the contractor (Prometric, I assume) rather than confirming verification of the facts.
Without in any way undermining the competencies of my colleagues from DGT, EPSO misses the point completely in believing that one can simply translate CBT questions, which test candidates under very tight timing, and that the level of difficulty will remain equivalent. A very simple example is the translation of texts into German, which unavoidably results in longer questions: 20 longer questions imply that German speakers are exposed to tighter timing constraints. There are many other features of languages that make translation an imprecise exercise. Whereas I am convinced that DGT has more than excellent quality-control systems for translating texts for the everyday use of the institutions (where readers are not tested on the comprehension of one particular text within two minutes), it cannot necessarily undertake the testing of how questions behave under testing conditions.
EPSO seems to have pre-tested only part of the questions, on a voluntary basis, through CAST CBTs. That is clearly insufficient in all possible ways.
Concerning the ex-post verifications by the Selection Board

The number of verifications the Selection Board gets to undertake could not possibly cover a reasonable share of CBT questions.
The underlying result is that, de facto, there is no difference from the 2010 General Competition, which was ruled illegal in the Pachtitis case: the pre-selection procedure is still conceived by EPSO, which is not legally competent, while the Selection Board performs a mere secondary, ex-post, partial verification of samples.
In addition, further reducing the number of questions actually verified, EPSO filters out most complaints from candidates under the excuse that the complaints are not specific. This is a crude way of cheating candidates: since they are not allowed to review the questions, it is difficult for them to remember the precise formulation of the problem. Please find in Annex 2 an example of such a case, which occurred to me.
A quantitatively objective proof that fewer verifications are undertaken than required is the number of questions which have been neutralised only after having been used in several CBTs, as quantified in the table above.

Concerning EPSO’s reply to Question (c):
A Rasch model is no guarantee of the objectivity of questions. It is simply a means to analyse the behaviour of questions; there is no short-cut to reviewing their quality. The parameters the Rasch model may use are response time and whether answers are right or wrong. Hence, the use of a Rasch model does not remove the need for careful verbal analysis, though it can hint at questions which candidates tend to take longer to resolve or tend to get wrong. But that CBT data is in itself rather misleading in so far as, with only 4 possible answers, candidates tend to answer the most difficult questions randomly. In the current CBT set-up it is difficult to use a Rasch model to distinguish a difficult question from an erroneous one. This is clear in EPSO calling “of more difficult standards” what is clearly erroneous, such as the 3 on-line CBT questions I challenged and which it has failed to defend.
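The point that item statistics alone cannot separate a hard question from a mis-keyed one can be illustrated with a small simulation. The set-up below is entirely hypothetical (the abilities, difficulties and mis-keying model are my assumptions, not EPSO's data): a genuinely hard item and a mis-keyed easy item both produce low observed success rates.

```python
import math
import random

random.seed(42)

def p_solve(theta, b):
    # Rasch model: probability that a candidate of ability theta
    # solves an item of difficulty b.
    return 1 / (1 + math.exp(-(theta - b)))

N = 10_000
abilities = [random.gauss(0, 1) for _ in range(N)]

# Item A: genuinely hard (difficulty b = 2), correctly keyed.
hard_rate = sum(random.random() < p_solve(t, 2) for t in abilities) / N

# Item B: easy (b = -1) but mis-keyed: candidates who solve it pick the
# true answer and are marked wrong; the rest guess among 4 options and
# match the faulty key with probability 1/4.
def marked_correct(theta):
    if random.random() < p_solve(theta, -1):
        return False               # solved correctly, but the key is wrong
    return random.random() < 0.25  # lucky guess hits the faulty key

err_rate = sum(marked_correct(t) for t in abilities) / N

print(f"hard item success rate:      {hard_rate:.1%}")
print(f"mis-keyed item success rate: {err_rate:.1%}")
# Both observed rates are low: from the score data alone the mis-keyed
# item simply looks like another item "of more difficult standards".
```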
Administratively speaking, the only legitimate means of quality control is exposing questions to candidates review.
It is clear that EPSO has by no means demonstrated that its methods are statistically sound, and all available evidence indicates margins of error which make its method unfair. EPSO mainly refers to its intentions, not to their verified outcomes.
However, after thoroughly studying and consulting statisticians on the matter, I am now convinced that EPSO is practically forced into ‘maladministration’ for as long as it is asked to do the impossible. As I have previously pointed out, it is practically unworkable for any institution to test over 50,000 candidates at once, by means of a minimalistic assessment with questions translated into 23 languages, in order to recruit the alleged top 1%.
Whereas EPSO could make more sensible use of CBT questions, as I have previously suggested, I believe that above all Human Resources policies could be improved at no cost so as to bring coherence to recruitment. Coherence requires matching the eligibility requirements to the competencies actually present among EU staff, but also making recruitment simply manageable. This can be achieved by:
1. Increasing the required level of studies (e.g. at least 5 years or Master’s level for AD);
2. Increasing professional-experience requirements (at least 5 years for AD);
3. Recruiting staff first on a contractual basis and only later, through merit, achievement and appraisals, promoting them to permanent and higher categories;
4. Increasing the foreign-language requirement to match the actual needs of the European service (at least one language at C2 level and another at B2 level among English, French or German, in addition to a third EU language at B2 level).
The above is the minimum necessary to be coherent with the actual level of expertise and competency of staff in service, and would reduce the unnecessary economic cost and spare the thousands of people deceived every year by a testing method which does not make anyone wiser.

Created by: ashbrmi last modification: Saturday 28 of July, 2012 [11:34:00 UTC] by ashbrmi

