|
Post by Ace Midnight on Mar 20, 2014 10:25:38 GMT -5
People have self-reported exclusion from phases and being left off the final register - why on Earth would we not expect SOMEONE to admit to a score lower than 60 if those scores were, in fact, being handed out?
Therefore, the only conclusion I can draw is that this email does not mean what it says. It CANNOT be a percentile. Percentiles would be evenly spaced throughout. The two decimal digits are even more confusing.
We have scores up to 100, but broken down into hundredths? How can that be a percentile? If ~1000 are on the final register, you can do that, easily, with tenths. The lowest scorer would be 0.1, the highest would be 100.0, and you would have them evenly spaced at 0.1 intervals, ties notwithstanding.
But we have scores from ~60.00 to ~100.00 (at least theoretically). Now, if the "percentile" also counts all of the people who were excluded after D.C. because of a lack of qualifying score on WD, SI or both - maybe, and the hundredths could represent the ties. But, again, people not on the register will not be considered for the job, so this doesn't make sense, either.
This is either a final numerical score on a range of "1" to "100" and the language in the email is inaccurate, or this is a "percentile" calculation based on factors of which we're not aware - this CANNOT be a percentile of people with a final numerical score and it is unlikely to represent a percentile of those who went to D.C.
I read it, I don't understand it, but math is math.
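The even-spacing point above can be sketched in a few lines; the ~1000 register size comes from the post, everything else is made up for illustration:

```python
# If ~1000 people hold distinct scores, their percentile ranks fall on a
# uniform 0.1 grid no matter what the raw scores themselves look like.
N = 1000
pct = [100.0 * (i + 1) / N for i in range(N)]   # percentile rank of the i-th lowest scorer

print(pct[0], pct[-1])                           # 0.1 100.0
gaps = {round(pct[i + 1] - pct[i], 6) for i in range(N - 1)}
print(gaps)                                      # {0.1} - every gap is exactly a tenth
```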
|
|
|
Post by luckylady2 on Mar 20, 2014 10:30:11 GMT -5
BTW, if my theory is right, sorry funky, it would mean that all the folks around the 75ish mark are in the 50th percentile of People-On-the-Register, and the 75th-ish percentile of All-Applicants.
|
|
|
Post by anotherfed on Mar 20, 2014 10:34:59 GMT -5
Have we no statisticians on this Board who can explain this?
|
|
|
Post by Ace Midnight on Mar 20, 2014 10:52:14 GMT -5
anotherfed - interesting you should ask for statisticians - I just posted on the polling board that I'm not a statistician - but this CANNOT be a percentile - at least not as that term is widely used - not a percentile of people with final numerical scores.
On the contrary, it looks very much like a range of 60 to 100, with a bulge at 73-74 (maybe up to 76) - quite a large bulge which suggests an ordinary, expected, bell curve of raw scores. A percentile graph should be much flatter, depending on how you resolve "ties", as for every 100 there is a 1, 99 a 2, etc., so the "by head" distribution will be much more even.
In this case we're dealing with, apparently, only "passing" scores.
Let's do the analysis - I am discounting any score below 60 and above 90 (at this point).
Also, let's assume that those who failed to make a qualifying score on either (or both) WD or SI did not receive "0" points, and they might represent a cluster in the 60s - but that is speculation.
The people who scored just barely enough on both sections (and ignoring the LBMT for now - let's assume that has a negligible effect on overall scores) - that is your low 60s. People who did really well on either and poorly on the other, OR performed in the middle on both - all of those people should cluster in the 75 range, or so - about 15 points from the bottom, 25 from the top. The people who got above 80 appear to represent about 13% to 15% of the polling data (if this were a percentile, shouldn't that number be much closer to 20? 100-80=20, right?). Those are people who did better than average on both tests, or really well on one and about average on the other, or have VP points (or some combination).
Obviously this isn't a perfect analysis, but I can't make the data we've collected fit the "percentile" hypothesis. Our data would be flawed to the point of unusability in that case (and I simply cannot believe that is true).
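A toy simulation of the bell-versus-flat contrast in this analysis (all numbers invented; this only illustrates the shape argument, not the actual exam):

```python
import random

random.seed(1)
N = 1000
# Invented raw scores: two subtest-like components summed, giving a bell shape.
raw = [random.gauss(75, 6) + random.gauss(0, 2) for _ in range(N)]

# Percentile rank of each person (the floats are all distinct, so ranks are unambiguous).
rank = {x: i for i, x in enumerate(sorted(raw))}
pct = [100.0 * (rank[x] + 1) / N for x in raw]

# Raw scores bunch up in the 70-80 band; percentile ranks put exactly ~100
# people in every 10-point band, i.e. a much flatter "by head" distribution.
raw_band = sum(70 <= x < 80 for x in raw)
pct_band = sum(70 <= p < 80 for p in pct)
print(raw_band, pct_band)
```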
|
|
|
Post by JudgeRatty on Mar 20, 2014 11:26:49 GMT -5
I don't think it is as complicated as we are making it. It doesn't seem to be a percentage or a set "rank." It is a "rating" that is listed as to how you stand relative to others. There is still a bell curve, as everyone has pointed out, and NOWHERE does it say "rank" or "percent" in the email. Those words are missing. They use the term rating. Some better, some not, so you are "relative to others" in that sense. I bet the reason that no one is reporting 1-40 or so is that to be in that range, you would not have had the minimum on the SI/WD. So even though the email said "1", that was just an example? I think so. Then again, maybe there are some 1-40 scores out there and we will never know, but my bet is that if there are scores in that low range, they are likely few and far between, which would not significantly affect the bell curve.
|
|
|
Post by Ace Midnight on Mar 20, 2014 11:36:17 GMT -5
It doesn't seem to be a percentage or on a set "rank." It is a "rating" that is listed as to how you are relative to others. There is still a bell curve as everyone has pointed out, and NOWHERE does it say "rank" or "percent" in the email. Those words are missing.

Absolutely correct: "In this scale, the lowest possible score is 1 and the highest possible score is 100. A score of 1 would mean only that this individual had the lowest rating relative to other candidates on the register (or was tied for the lowest rating), not that s/he "failed" the examination or was unqualified in any way. Candidates placed on the register not only have been determined to be qualified but also have been drawn from the high-scoring group of those who took the online assessments, so their ratings signify only how they placed relative to others on the same register."

And I agree that the "1" is likely an example. In our case, "60" seems to represent the lowest score of those on the register (or that's our best evidence - at least "around" 60), and the upper scores appear to be in the upper 80s to mid-90s (although I'm still not sold on a significant number in the 90s - although, with VP, a good score makes it possible). The distribution seems to put the 50% mark right in the 73, 74, 75 range. Because we have 1000 (or so), the two-digit resolution of the scores seems to allow for relatively sharp breaks in there, as well. My estimates are that roughly 15% are 80 and above, and roughly 15% are 66 and below. The "middle" 70% is roughly 67 to 79.

ETA: Further resolving, the top 25% begins around 78(ish) and the bottom 25% runs up to 69(ish), with the middle 50% being 70 through 77 - again, all estimates based on the poll.
|
|
|
|
Post by sandiferhands (old) on Mar 20, 2014 11:44:38 GMT -5
Have we no statisticians on this Board who can explain this? Again, anotherfed and I are sniffing the same trail. I think we may have some confusion about exactly what a "percentile rank" is. A percentile merely identifies the percentage of all sample members who scored above or below a specified score. It is specifically NOT a "flat" curve, but is expected to be more or less bell-shaped. Thus, the numbers we were emailed could very well be a percentile rank, as could the smaller subset of those responding to our polls, as both would be expected to be bell-shaped. It appears some in this thread may be confusing this with a "normal curve equivalent", which would be a flat, one-person-per-slot ranking. (See the wiki entry on percentile rank.) As far as how the poobahs got there, I have no idea. There are too many variables we just don't know. But unless the author of the email was somehow confused, it seems pretty clear that the numbers we received were a percentile rank, IMHO.
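For what it's worth, the textbook percentile-rank formula the post refers to looks like this (the sample scores are made up, and this is a standard definition, not OPM's):

```python
# Percentile rank of a score: percent of sample members scoring below it,
# with ties counted at half weight.
def percentile_rank(score, sample):
    below = sum(s < score for s in sample)
    ties = sum(s == score for s in sample)
    return 100.0 * (below + 0.5 * ties) / len(sample)

sample = [62, 68, 71, 73, 73, 75, 78, 81, 84, 92]
print(percentile_rank(73, sample))   # 40.0 (3 below, 2 tied)
print(percentile_rank(92, sample))   # 95.0
```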
|
|
|
Post by robespierre on Mar 20, 2014 11:56:35 GMT -5
I don't think the score is the total of points or a percentage. We were scored on 13 competencies, all equally important. We don't know the total points available for each competency, but I'm going to guess it was more than 7.69 (100/13) and that a perfect "competency score" was a number much higher than 100. Then, to achieve a numerical rating between 1 and 100 (sans vet points), a mathematical calculation occurred. That calculation could have been a simple conversion with the total competency score being reduced by a multiplier, or perhaps the different parts of the test were weighted and different multipliers were used. Either way, after [insert mathematical equation here], our "raw" (competency) score fit neatly onto a 100-point scale. Then add vet points and the register is set.
I read (I believe it was an exhibit to the Azdell litigation) the different ways OPM has scored this exam in the past. This e-mail seems like OPM's attempt to explain that they have addressed those scoring issues. And while I don't think anyone on the register has a score of 1, that may be OPM's way of addressing their past practice of giving 70 points to everyone who tested so they would be considered "qualified." (Again, from the Azdell litigation.)
OPM e-mail: "The competencies measured in the assessment process were combined to determine a total competency score. To compute the total competency score, the following steps were taken:
The competencies were made equal in importance, so that each competency carried equal weight,
The points earned on each competency were added together to determine the total competency scores for each applicant, and
The total competency scores of all applicants were put on a 1 to 100 scale to establish each applicant's numerical rating, excluding veterans' preference. In this scale, the lowest possible score is 1 and the highest possible score is 100."
Yes, I agree about the multiplier. They took the raw total competency scores and applied some sort of multiplier to force the numbers onto a 1-100 scale, with the lowest scorer at 1 and the highest at 100. (That's what the e-mail means when it says "the total competency scores of all applicants were put on a 1 to 100 scale.") The results naturally form a bell curve, because that's what you get when you plot the results of a fair test objectively scored. It's not a perfect bell, but that's because (a) nothing's ever perfect, and (b) the vet points were added at the very end and tend to distort the bell (that's what the e-mail says if you read it carefully; the vet points were added AFTER the numbers were put on a 1-100 scale ... this gives the vets maximum benefit and wards off litigation). There probably actually were scores given out like 1 or 6 or 20. But those would be the extreme outliers (you get them in any bell), so just a few of them exist, and naturally they wouldn't advertise the score on this board. Bottom line: The scores were placed on a 1-100 scale via a multiplier of the raw scores, not by creating a percentile ranking. I feel strongly about this.
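A minimal sketch of the multiplier reading, assuming a straight linear rescale with the lowest raw total pinned to 1 and the highest to 100 (the raw numbers are invented; OPM's actual formula is unknown):

```python
def rescale_1_to_100(raw_scores):
    """Linearly map raw totals so min -> 1 and max -> 100."""
    lo, hi = min(raw_scores), max(raw_scores)
    return [1 + 99.0 * (r - lo) / (hi - lo) for r in raw_scores]

raw = [812, 905, 1040, 1187, 1300]                    # made-up competency totals
scaled = [round(s, 2) for s in rescale_1_to_100(raw)]
print(scaled)   # endpoints land exactly on 1.0 and 100.0; order is preserved
```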
|
|
|
Post by Deleted on Mar 20, 2014 12:03:43 GMT -5
Hey, ODAR, I've pretty much given up trying to decipher OPM's "helpful" explanation of our ratings. Just shoot me an email if you think I might possibly be ALJ material.
|
|
|
Post by moopigsdad on Mar 20, 2014 12:25:32 GMT -5
I think OPM's email explanation of scoring is as clear as MUD. As can be seen from the multitude of explanations in previous posts, there are many different interpretations as to what OPM meant by its email. It seems to me they have muddled, rather than clarified, how scoring was done by their use of terms and explanation. It is this type of email that leads to possible court action, given the many possible interpretations one can draw from it. I am not going to give it any more thought - at least I have a score and made the Register - although I think there were a lot of plausible interpretations given by many posters to this Board.
|
|
|
|
|
Post by moopigsdad on Mar 20, 2014 12:34:23 GMT -5
There was no percentile ranking. There was no multiplier or mystery calculation. To place any set of numbers on a 1-100 scale, you simply divide each number by the largest number and move the decimal point two places to the right. It works for numbers less than 100 also: 88/88 = 1.00 converts to 100, 77/88 = 0.875 converts to 87.50, etc. There is no mystery to the conversion. The only reason they had to convert the raw score to the 1-100 scale was to give the veterans' preference the proper weight, as was previously litigated. The only problem with that scenario is that some scores wouldn't match up with your analysis. There are scores of 73.02 and 68.98 and 78.36. Hence, those breakdowns in the hundredths of a point would not be possible with any division of scores that makes sense, at least not as you are explaining it in your post. Furthermore, under your scenario there would be a lot of people with the same exact score, and I think very few if any have the exact same point total.
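The divide-by-the-top-score conversion, and the objection to it, can both be checked with the post's own numbers (88 as a hypothetical top raw score, and whole-point raw totals, which is just an assumption):

```python
def to_100_scale(raw, top):
    """Divide by the top score and shift the decimal two places."""
    return round(100.0 * raw / top, 2)

print(to_100_scale(88, 88))   # 100.0
print(to_100_scale(77, 88))   # 87.5

# The objection: with whole-point raw scores over a top of 88, a reported
# score like 73.02 is simply not reachable.
reachable = {to_100_scale(r, 88) for r in range(1, 89)}
print(73.02 in reachable)     # False
```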
|
|
|
Post by anotherfed on Mar 20, 2014 12:36:14 GMT -5
Thank goodness -- gotta love Wiki. Now to really screw with the statistically challenged, shall we discuss standard deviations??? Muah ha ha ha.
|
|
|
|
|
Post by gary on Mar 20, 2014 12:42:35 GMT -5
That may have to do with them having 13 equally weighted components. The scores on each component have to be adjusted to a scale where a perfect score is 7.69230769 before it is added to the others to yield the total score on a scale where 100 is perfect. Then VP points would be added. OPM may have computed scores to more decimals than they report, or rounded to hundredths before computing the final scores.
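Gary's reading can be sketched like this; the per-competency maximum marks and the applicant's scores are invented, and only the 13-way equal weighting comes from the e-mail:

```python
COMPONENTS = 13
per_component_max = 100.0 / COMPONENTS      # ~7.6923 points per competency

def scale_component(points, points_possible):
    """Rescale one competency so a perfect mark is worth 100/13 points."""
    return per_component_max * points / points_possible

# Hypothetical applicant scoring 18 of 25 on every competency.
scores = [(18, 25)] * COMPONENTS
total = sum(scale_component(p, m) for p, m in scores)
print(round(total, 2))   # 72.0 - i.e. 72% of the available points
```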
|
|
|
|
|
Post by gary on Mar 20, 2014 12:50:43 GMT -5
Alternatively they could score each component on identical scales and do the conversion. Then add VP points. I don't know as it would make a difference in the final scores.
|
|
|
|
|
Post by sealaw90 on Mar 20, 2014 12:54:53 GMT -5
Gary, you are repeating the mistake of others in adjusting the components to equal 100. That was not what was done. The components were added, then the total raw score was converted, as I explained earlier, to a 1-100 scale. It is a simple process, no mystery math or complicated formulas. Wannabee, this is my interpretation of what you're saying: This is what OPM means by rating each applicant relative to the other applicants. The absolute top score on the test, whatever it was, is the one that sets the denominator for the rest of our scores to be divided by. For example, one of us (NOT me) got a total score from all testing components and all competencies of 1,300.00. Their score is divided by itself to get 1; move the decimal over two places to the right to get 100. The rest of our measly scores are then divided by 1,300; move the decimal over two places to the right and there's your score. Under this scenario, you will never be able to find out your "raw" score unless you know the top score for all applicants. Correct??
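This hypothetical in code form (the 1,300.00 top score is the post's invented example); note the last function: inverting the conversion requires knowing that top score, which is the post's closing point:

```python
TOP_RAW = 1300.0   # hypothetical best total, straight from the post

def rating(raw_total):
    """Divide by the top raw total and shift the decimal two places."""
    return round(100.0 * raw_total / TOP_RAW, 2)

def raw_from_rating(r):
    """Invert the conversion - only possible if TOP_RAW is known."""
    return r * TOP_RAW / 100.0

print(rating(1300.0))          # 100.0
print(rating(975.0))           # 75.0
print(raw_from_rating(75.0))   # 975.0
```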
|
|