[–] bayesianqueer 0 points 15 points (+15|-0) ago 

The problem with this is that you can only adjust so much. If you look at the community hospital where I practice in CA, it looks better with regard to complication rates than Brigham and Women's Hospital and the Mayo Clinic. Now I'm not saying I work at a bad hospital, but seriously, we aren't that good. Mayo and the Brigham take patients for surgeries that the surgeons at my hospital wouldn't touch with a ten-foot pole. Moreover, the best orthopedist at my hospital scores more poorly than the one who refuses to take call and cherry-picks his patients.

The problem is that measuring quality is exceedingly hard. There are some measures that seem objective but can still be gamed, and they will often make surgeons who do the right thing (i.e., take call and get the luck of the draw for their patients) look worse than those who only do elective cases and can pre-select patients less likely to have complications. Moreover, the difference in some of these categories that makes one surgeon rank low versus high can be as little as 1%. That amount of difference is easily within the fudge room of patient selection.
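To make the case-mix point concrete, here is a minimal simulation (hypothetical complication rates and case mixes, not real hospital data): two surgeons with identical per-case skill, where one takes call and draws more high-risk patients while the other cherry-picks electives. The raw complication rates diverge by well over the 1% margin mentioned above, purely from patient selection.

```python
import random

random.seed(0)

def raw_complication_rate(n_cases, rate_by_risk, high_risk_share):
    """Simulate n_cases and return the unadjusted complication rate.

    rate_by_risk: per-case complication probability for each risk tier
    high_risk_share: fraction of this surgeon's cases that are high-risk
    """
    complications = 0
    for _ in range(n_cases):
        tier = "high" if random.random() < high_risk_share else "low"
        if random.random() < rate_by_risk[tier]:
            complications += 1
    return complications / n_cases

# Identical skill for both surgeons (hypothetical per-tier rates).
skill = {"low": 0.01, "high": 0.08}

# Surgeon A takes call: 40% of cases are high-risk.
# Surgeon B cherry-picks electives: only 5% high-risk.
rate_a = raw_complication_rate(10_000, skill, high_risk_share=0.40)
rate_b = raw_complication_rate(10_000, skill, high_risk_share=0.05)

print(f"on-call surgeon raw rate:      {rate_a:.1%}")
print(f"cherry-picking surgeon rate:   {rate_b:.1%}")
print(f"gap from case mix alone:       {rate_a - rate_b:.1%}")
```

With these assumed numbers the expected gap is roughly 2.5 percentage points even though the surgeons are equally skilled, which is several times the 1% margin separating "low" from "high" performers.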

Unfortunately, I don't have a solution for assessing physicians, and neither does anyone I have come across (and I chair my hospital's quality committee, so I am aware of ALL the measures used by CMS and other agencies).

[–] Digik 0 points 1 point (+1|-0) ago 

I agree, but I think we still have to try to define something. For one, to measure, could you separate out the outcome of the primary surgery from common complications (e.g. post-surgery infection rates...)? Or group performance ratings by common levels of difficulty, sorting physicians into front-line care and second- and third-level experts (those physicians other doctors refer to when they hesitate to take on a problem...)?
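The grouping idea above can be sketched as stratified reporting: tally complications per (surgeon, difficulty tier) so each surgeon is only compared against peers within the same stratum. This is a toy illustration with made-up case records, not a validated risk-adjustment method.

```python
from collections import defaultdict

# Hypothetical case records: (surgeon, difficulty_tier, had_complication)
cases = [
    ("A", "high", True), ("A", "high", False), ("A", "low", False),
    ("A", "low", False), ("B", "low", False), ("B", "low", True),
    ("B", "low", False), ("B", "high", False),
]

# Count cases and complications separately for each (surgeon, tier),
# so a call-taking surgeon's high-risk load doesn't dilute the
# comparison of routine, low-risk cases.
totals = defaultdict(int)
complications = defaultdict(int)
for surgeon, tier, had_complication in cases:
    totals[(surgeon, tier)] += 1
    complications[(surgeon, tier)] += had_complication

for surgeon, tier in sorted(totals):
    n = totals[(surgeon, tier)]
    c = complications[(surgeon, tier)]
    print(f"surgeon {surgeon}, {tier}-risk: {c}/{n} complications")
```

Within-stratum comparison removes the case-mix distortion described in the parent comment, though it assumes the tiers themselves can be assigned honestly, which is exactly where gaming tends to creep back in.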