Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Mar 19, 2014 22:04:09 GMT -5
That's the way I see it, FO. I think our final rating is expressed in terms of percentiles. For example, when you got your SAT scores, you got two numbers: 1500 as a score, and an explanation that you were in the 98th percentile. The more I read the last email the more I am leaning this way. They used the term comparative and how the rating is a value that is compared to others. They specifically point out that an individual with a 1 did not fail but that is how they rank when compared to others on the register. This is quite a different interpretation! Using terms like "relative to others" sure sounds like a ranking rather than a raw score.

I note that the subject line of the OPM email was "Understanding Your ALJ Final Numerical Rating," but the content of the email definitely isn't a model of clarity. The result? Certainly not "Understanding [My] ALJ Final Numerical Rating."

Anotherfed and sratty, I see how the email can be read to support your "ranking" theory, but I'm skeptical. My hypothesis is that it's simply a screwy explanation in the OPM email.
|
|
|
Post by funkyodar on Mar 19, 2014 22:07:27 GMT -5
From the folks that brought you the LBMT, it's a clarification email that actually further muddies the waters.
|
|
|
Post by FlaTreeFarm on Mar 19, 2014 22:38:05 GMT -5
This may be the preemptive strike to break up the onslaught of incoming appeals. As the old saying goes: If you can’t dazzle them with brilliance, then baffle them with B____ S_____!
|
|
|
Post by gary on Mar 19, 2014 22:42:35 GMT -5
As much as I would like to believe our scores are percentiles, I just don't see it. The median score in our poll is 73. If the percentile theory were correct, the median for the whole register would be 50. While I am not sure of the representativeness of our poll's sample or the truthfulness of all its participants, I don't know of any factor that would convince me our poll's scores run 23 percentile points higher than those of the whole register.
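For what it's worth, the arithmetic behind this objection fits in a few lines of Python. The 900-name register size below is the thread's rough guess, not an official figure:

```python
# If final ratings were percentile ranks, the whole register's scores
# would spread uniformly from (near) 1 to 100, so the register-wide
# median would land near 50 -- well below our poll's median of 73.
register_size = 900  # the thread's rough estimate; not an official figure

percentile_scores = [100 * (i + 1) / register_size for i in range(register_size)]

def median(xs):
    """Middle value of a list (mean of the two middle values if even length)."""
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

print(round(median(percentile_scores), 2))  # 50.06 -- nowhere near 73
```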
|
|
|
Post by sandiferhands (old) on Mar 19, 2014 22:57:58 GMT -5
I have to agree with anotherfed's analysis. Her impressions were mine, also, when I read the email. I think it is intuitively correct that those on this board are a little more plugged into and perhaps passionate about this whole process, and it stands to reason that the scores of those here may be skewed a little higher. It is curious that our distributions seem to make a perfect bell, but that could be explained statistically if our sample is random enough. What intrigues me is the issue of whether this type of scoring analysis (using raw scores to create a statistically bell-shaped curve of percentile rankings) means that the selection process will employ the "category rating" method. There was a discussion of it in the thread below, back in July. I believe the original announcement verbiage is broad enough to allow this method. The Puzzle Palace guys and gals having to work through the mechanics of applying this new rule would also neatly explain the delay in getting our scores to us.
July thread on category method
|
|
|
Post by Orly on Mar 19, 2014 23:26:57 GMT -5
"From the folks that brought you the LBMT, it's a clarification email that actually further muddies the waters." Indeed:
|
|
|
Post by robespierre on Mar 20, 2014 0:08:57 GMT -5
Our poll is so skewed that what should be a straight line comes out as a near-perfect bell? Hard to believe. But maybe you're right; we may never know.
|
|
|
Post by robespierre on Mar 20, 2014 0:21:59 GMT -5
Ok, so another, do I understand your theory correctly? That your score is not the total of your points but instead a representation of what percentage of other people on the register had lower point totals? As in, someone with a 75.00 is ahead of 75% of the other candidates and behind 25%? I like that idea. But our polling sure has a lot of people camped out at 73 to 74. Seems too perfect a bell for it to represent actual individual rank. And wouldn't there have to be some low scores? Like, really low? Don't get me wrong, I'm sure everyone would love that scenario. A cert of 300 from an approximate 900 reg would mean everyone down to a 66 made the cert. Even lower when gals come into play. Of course, it would also have to mean a whole funk load more than 900 on the reg to account for all the people that have to have scores below those reported. But then, like you, I became a lawyer because the most math I am comfortable with is figuring 33 1/3 of a total.

It's even simpler than that, Funkmeister. If the scores are a percentile ranking, and there are 200 poll respondents (I'm rounding from the actual total of 228 to make the math simpler), then the poll should show two 92's, two 80's, two 73's, two 65's, two 52's, and so on (at least ideally ... obviously there is always SOME sampling error in a poll). Instead, we've got zero 92's, five 80's, twenty-four 73's, six 65's, and zero 52's. That's too far off. And it's so far off that it's not explainable by this board skewing towards high scorers (which is just a hypothesis anyway). I just don't think these are percentiles. Note that the e-mail carefully avoids the words "percentile" or "ranking."
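The mismatch described above is easy to tabulate. A quick sketch using the poll counts quoted in the post (the 200 is the post's rounded respondent total):

```python
# Under the percentile-ranking theory, 200 respondents spread evenly over
# scores 1-100 should put roughly 2 people at every whole score.
respondents = 200
expected_per_score = respondents / 100  # = 2.0

# Observed poll counts quoted in the post:
observed = {92: 0, 80: 5, 73: 24, 65: 6, 52: 0}

for score in sorted(observed, reverse=True):
    print(f"score {score}: expected ~{expected_per_score:.0f}, observed {observed[score]}")
```

The pile-up at 73 and the empty tails are hard to square with a uniform percentile spread, even allowing for sampling error.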
|
|
|
Post by privateatty on Mar 20, 2014 6:16:52 GMT -5
This may also explain why the scores go out to the second decimal: 73.34, for example. The score is broken down to two decimals so that there are no ties, or at least very little chance of one. Takes care of those pesky vets too.
|
|
|
Post by 71stretch on Mar 20, 2014 7:43:14 GMT -5
I think this email caused more issues than it solved. A score of 1 doesn't mean you failed or were unqualified in any way? Please. If such a person exists, are they on the register? I doubt it.
|
|
|
Post by minny on Mar 20, 2014 8:00:24 GMT -5
I have to agree with anotherfed's analysis. Her impressions were mine, also, when I read the email. I think it is intuitively correct that those on this board are a little more plugged into and perhaps passionate about this whole process, and it stands to reason that the scores of those here may be skewed a little higher. It is curious that our distributions seem to make a perfect bell, but that could be explained statistically if our sample is random enough. What intrigues me is the issue of whether this type of scoring analysis (using raw scores to create a statistically bell-shaped curve of percentile rankings) means that the selection process will employ the "category rating" method. There was a discussion of it in the thread below, back in July. I believe the original announcement verbiage is broad enough to allow this method. The Puzzle Palace guys and gals having to work through the mechanics of applying this new rule would also neatly explain the delay in getting our scores to us.
July thread on category method

Category ratings replace the rule of 3. If used, OPM could take anyone within Category A, then move on to Category B. In my experience, Category C does not go anywhere, but who knows in this type of situation. This was designed to give a lot more flexibility in the selection process. Assuming my agency follows a typical protocol, the applicants do not get a numerical score - they only get the category ranking (A, B, or C) in their NOR.
|
|
|
Post by 71stretch on Mar 20, 2014 8:17:09 GMT -5
They are not going to use category rating, according to the announcement.
"It is the responsibility of the hiring agency to make selections from the list of candidates referred for employment consideration from among the highest three available names, taking into consideration veterans' preference and other civil service rules."
The rule of three still applies here.
|
|
|
Post by agilitymom on Mar 20, 2014 8:28:08 GMT -5
The email was much like the logic based test. Break it down in parts:
1. For everyone who took the SJT, WS, and EA (phase 2)...you received a score (which included the 5 or 10 vet pref points, if applicable). Then everyone was placed on a list in order of the score they received (I would assume ranked from highest to lowest in numerical order). OPM then drew a cut line (probably a percentage such as bottom 1/3 or something...we have no way of knowing that cut line) and only those above the cut line were allowed to move forward in the testing process.
2. The above the cut line group moved on to the WD, LMBT, and SI. If you made the minimum score required on the WD and the SI (again, no way of knowing how these parts were actually scored) you received a final numerical rating. Those who did not make the minimum score were removed from further consideration.
3. For those that made the minimum score on both the WD and the SI, you made the register, but OPM wasn't quite done. Next, the points you earned for each competency were combined to achieve your overall competency rating. This rating excluded any vets preference additional points. (Which is why, most likely, the scale was 1-100).
4. Each person's total competency score was then rank-ordered based on the numerical value of the total competency score. It does not mean that those who were removed from consideration somehow received a lower score than those making the register. It simply means that your total competency score falls in a certain place with respect to others who made the register. I think that was what they were trying to convey when they said "a score of 1 would only mean that this individual had the lowest ranking score relative" to the other qualified candidates who made the register. In other words, whatever your final competency score is (high or low), you still made the register and have been determined to be qualified to become an ALJ, if selected. In reality, the lowest scorers will probably have to wait longer for consideration. That hasn't changed from past registers.
5. Your final numerical rating is the sum total of the scores you received on each of the competencies measured (not the total score from a particular component of the assessments we took, because each assessment measured the 13 competencies in some way and you received some kind of rating for each of the 13 competencies, which were of equal weight). For pref eligible vets, your five or 10 points were added to the total competency rating to achieve your final numerical rating.
6. In reality, I think it would be quite difficult for an individual to score a "1" since we were scored on 13 competencies, all of which were equally weighted...so 13 would be the lowest possible score making the register. Of course, if you only scored a "1" on each of the competencies you would not have made the cut at phase 2.
7. Maybe.
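If steps 3 and 5 above are right, the arithmetic is just a sum plus vet points. A minimal sketch with made-up numbers (the per-competency point values are pure invention for illustration; we don't know the real scale):

```python
# Hypothetical per-competency points for the 13 equally weighted competencies.
competency_scores = [5, 6, 4, 7, 5, 6, 6, 5, 7, 4, 6, 5, 6]
assert len(competency_scores) == 13

vet_preference = 5  # 0, 5, or 10 points, depending on eligibility

total_competency = sum(competency_scores)         # step 3: equal weight = plain sum
final_rating = total_competency + vet_preference  # step 5: vet points added at the end
print(total_competency, final_rating)             # 72 77
```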
|
|
|
Post by BagLady on Mar 20, 2014 8:56:19 GMT -5
I don't think the score is the total of points or a percentage. We were scored on 13 competencies, equally important. We don't know the total points available for each competency, but I'm going to guess it was more than 7.69 (100/13) and that a perfect "competency score" was a number much higher than 100. Then, to achieve a numerical rating between 1-100 (sans vet points), a mathematical calculation occurred. That calculation could have been a simple conversion with the total competency score being reduced by a multiplier, or perhaps the different parts of the test were weighted and different multipliers were used. Either way, after [insert mathematical equation here], our "raw" (competency) score fit neatly into a 100-point scale. Then add vet points and the register is set.
I read (I believe it was an exhibit to the Azdell litigation) the different ways OPM has scored this exam in the past. This e-mail seems like OPM's attempt to explain that they have addressed those scoring issues. And while I don't think anyone on the register has a score of 1, that may be OPM's way of addressing their past practice of giving 70 points to everyone who tested so they would be considered "qualified." (Again, from the Azdell litigation.)
OPM e-mail: "The competencies measured in the assessment process were combined to determine a total competency score. To compute the total competency score, the following steps were taken:
The competencies were made equal in importance, so that each competency carried equal weight,
The points earned on each competency were added together to determine the total competency scores for each applicant, and
The total competency scores of all applicants were put on a 1 to 100 scale to establish each applicant's numerical rating, excluding veterans' preference. In this scale, the lowest possible score is 1 and the highest possible score is 100."
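The "put on a 1 to 100 scale" step OPM describes is consistent with a simple linear (min-max) conversion. A sketch, with the raw-score range purely assumed for illustration:

```python
def rescale(raw, raw_min, raw_max, lo=1.0, hi=100.0):
    """Linearly map a raw total from [raw_min, raw_max] onto [lo, hi]."""
    return lo + (raw - raw_min) * (hi - lo) / (raw_max - raw_min)

# Suppose raw totals over 13 competencies could range from 13 to 130
# (assumed numbers -- we don't know the real range):
print(rescale(13, 13, 130))            # lowest possible raw score -> 1.0
print(rescale(130, 13, 130))           # highest possible raw score -> 100.0
print(round(rescale(95, 13, 130), 2))  # a mid-high raw score lands in the 70s
```

Whether OPM used one multiplier or weighted the parts differently, some [insert mathematical equation here] of this general shape would make the raw scores fit a 100-point scale.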
|
|
|
Post by owl on Mar 20, 2014 9:08:36 GMT -5
I think the key to OPM's email is: "The points earned on each competency were added together to determine the total competency scores for each applicant, and the total competency scores of all applicants were put on a 1 to 100 scale to establish each applicant's numerical rating...."
So I think every applicant was, ultimately, given a numerical rating. They just don't tell you your number unless you made all four cuts/minimums (phase 1, phase 2, WD, SI) and thus made the register. I think a person who scored a 1.00 would be someone who had the requisite 7 years of legal experience, but had no litigation experience (rated a 1 out of 5) and then got a complete zero on both the writing sample and SJT. I doubt that actually happened, but anyway the point I think OPM is making is that everyone got a raw score, and then they just converted those scores to a 1-100 scale. (And the reason for conversion to a 1-100 scale is so that when vet points are added those 5 or 10 points have the weight the law means them to have.) There is no "1" on the register. Hence, the following sentence from the email: "Candidates placed on the register...have been drawn from the high-scoring group of those who took the online assessments, so their ratings signify only how they placed relative to others on the same register."
|
|
|
Post by sandiferhands (old) on Mar 20, 2014 9:32:55 GMT -5
The email was much like the logic based test. If this is true, you would be well-advised to ignore all of my postings about the email.
|
|
|
Post by jessejames on Mar 20, 2014 9:42:32 GMT -5
From a psychological per
|
|
|
Post by jessejames on Mar 20, 2014 9:56:33 GMT -5
From a psychological perspective, a score below 70 is seen as "failing" so no one wants to admit failure. However, it appears that the scores are actually "rankings" that are evenly divided between 1 and 100. If 50 or below, then this just means you are ranked in the lower half, not a failure.
However, I doubt that many, if any, would want to publicly disclose that they are ranked in the lower half. So we should not expect to get accurate poll data.
|
|
|
Post by Gaidin on Mar 20, 2014 10:20:10 GMT -5
There is almost no way, based upon everything else we know, that these are rankings with an even distribution at each number. That would mean that if 100% of the people who scored a 73 were reporting, the register size would be around 2,400. That is simply not possible given the other information we possess.
I also find the idea that we are spread evenly but that the people on this board make a bell curve improbable. Even if you accept that we are high scorers, you can't really expect that we are also a bell curve of high scorers when no bell curve should exist.
I believe that the scores, however derived, have been converted to a 100-point scale. Because there are 13 competencies, there will be some need for percentile points to complete the conversion. Once converted, our scores created a pretty neat, but not exact, bell curve. The polling data is not wholly reliable, but it's pretty darn good evidence about how the scores are distributed.
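The register-size point above is one multiplication, using the poll's twenty-four 73's and the generous assumption that every 73 on the register answered the poll:

```python
# If ratings were evenly distributed ranks, each whole score 1-100 would
# hold the same number of people, so the register would be at least:
people_at_73 = 24        # poll count at a score of 73
score_buckets = 100      # whole scores 1 through 100
implied_register = people_at_73 * score_buckets
print(implied_register)  # 2400 -- far above the thread's ~900 estimate
```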
|
|
|
Post by luckylady2 on Mar 20, 2014 10:23:25 GMT -5
I agree with owl. My read is that everyone who applied (even if they didn't make it past submitting an application) was given a numerical rating in the end. The low scorers were culled at each phase, but it sounds like they were still in the mix in terms of ultimate percentiles. This gives OPM the ability to say that everyone on the register is in, say, the upper 50% of all applicants, which I suppose is some comfort to OPM and the agencies concerning qualifications/abilities. If this is true, then it would make sense that, among the upper half of all applicants, the ratings are doled out in a way that the population of folks who make the register forms a bell curve with its peak hitting around 75.
Just speculation, of course, but it makes sense. Either way, the email does seem to clarify that the NOR is the result of each competency being given equal weight, and not the mere addition of "right answer points." I.e., you could knock the writing aspect out of the park, but writing won't contribute more than 1/13 of the score - it won't fill in points lacking on another competency.
|
|