[–] seth_storm 1 point (+1|-0) ago (edited)
As in revealing that individuals are in an undesirable category (e.g. long-term unemployed) that can't easily be found out otherwise?
[–] ProgHog231 2 points (+2|-0) ago
As someone who has worked in this area for a couple of decades, I'm not sure that this is really news. What is probably different is the increased awareness by the press and the public about the development of such technologies and the wider use of analytics. As the article suggests, these problems are not technical, but have more to do with design, awareness of the law and best practices, and sufficient business oversight and review.
For example, a real historical problem in mortgage lending was a practice called redlining: the illegal use of racial factors in lending decisions. By far the worst practices involved biased (and bigoted) human decisions, and in the 90s there was a pretty big crackdown. One response from lenders was a greater reliance on statistical models to identify potential customers for 'invitation to apply' programs, and I worked with a number of financial institutions on developing these. Input variables would typically include a range of household demographic attributes, and it was fairly straightforward to eliminate obvious sources of bias, such as race and ethnicity. As /u/Psycoth points out, though, using other ostensibly neutral factors (household income, education levels, etc.) could still cause problems, since neighborhoods with higher minority populations could index lower on many of these factors. Additional analysis on the back end of model development was required to make sure the resulting selections did not wind up redlining neighborhoods all over again.
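The back-end check I'm describing can be sketched in a few lines. This is a toy illustration with made-up numbers, not any lender's actual model: race is never an input, but income is constructed here to correlate with neighborhood composition, so a "neutral" income cutoff still skews who gets invited.

```python
import random

random.seed(0)

# Hypothetical illustration: race/ethnicity is excluded from the model,
# but income correlates with neighborhood type by construction, so an
# income-only rule can still redline. All numbers are made up.
def make_household(minority_nbhd):
    # Assumed gap: minority-heavy neighborhoods index lower on income here.
    base = 40000 if minority_nbhd else 60000
    return {"minority_nbhd": minority_nbhd,
            "income": random.gauss(base, 10000)}

households = [make_household(i % 2 == 0) for i in range(10000)]

INCOME_CUTOFF = 55000  # the ostensibly neutral 'invitation to apply' rule

def invite_rate(minority_nbhd):
    # Selection rate for one neighborhood type under the income cutoff.
    pool = [h for h in households if h["minority_nbhd"] == minority_nbhd]
    return sum(h["income"] > INCOME_CUTOFF for h in pool) / len(pool)
```

Comparing `invite_rate(False)` against `invite_rate(True)` is exactly the kind of after-the-fact analysis that catches a model before it redlines by proxy.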
TL;DR: not a new phenomenon, but I agree that companies need to be aware of unintended consequences and the law.
[–] Psycoth 3 points (+3|-0) ago
I really don't like the article's use of the term "algorithmic bias". It makes it seem that algorithms that process data are intentionally discriminatory. It seems to me that these programs are simply pointing out the larger social issues that we have developed as a society.
E.g., with the Princeton Review issue:
The PR wants to do one thing: maximize profits. They're a business; that's what they do. To do this, they create an algorithm that adjusts price based on how much people are willing to pay for their product. I don't have access to how they arrived at the base price, but I'm guessing they used A/B testing to arrive at the conclusion that they can charge more or less based on zip code. Zip code is a mandatory field people fill out when they order the product, and location-based demographic information is a common metric in analyzing data.
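Since I'm guessing at how it works, here's a minimal sketch of what that kind of pricing logic might look like: an A/B test produces a willingness-to-pay multiplier per zip code, and the quoted price is just the base price scaled by it. The base price, zip codes, and multipliers below are all assumptions for illustration, not PR's actual numbers.

```python
# Hypothetical zip-based pricing, as guessed above. Nothing here is
# Princeton Review's real data or logic.
BASE_PRICE = 600.00

# Multipliers a hypothetical A/B test might have produced per zip code.
ZIP_MULTIPLIER = {
    "10001": 1.25,  # market that proved willing to pay more
    "60601": 1.00,
    "73301": 0.85,  # price-sensitive market
}

def quote_price(zip_code):
    # Unknown zip codes fall back to the base price.
    return round(BASE_PRICE * ZIP_MULTIPLIER.get(zip_code, 1.0), 2)
```

Notice there's no racial variable anywhere in this sketch; any racial disparity in the quotes would come entirely from who happens to live in which zip code.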
Once the data has been gathered and the algorithm has been put in place, what do we find? If you scrutinize the data with respect to race, you can come to the conclusion that Asians are getting charged more. However, if you dig in and determine exactly what is going on, you might find that the zip codes being charged more are simply the ones where people proved willing to pay more, and that those zip codes happen to have larger Asian populations.
In this case, there's no racial bias imposed by the algorithm; it's just pointing out that some of our stereotypes about Asians may not be that far from the truth. Asking to remove this perceived bias is more like shooting the messenger than correcting discrimination. This type of market analytics suggests that the US (and maybe other countries) suffers from cultural fragmentation and social inequality. These are the kinds of issues that need to be addressed on a different level. Forcing people to change these algorithms because we don't like what they may be exposing about our culture is a poor reaction.
This, of course, assumes that someone at PR didn't just walk in one day and say, "Hey, I bet we can charge Asians more money to use our product."
[–] babimbang [S] 4 points (+4|-0) ago
There still exists “a large legal difference between whether there is explicit legal discrimination or implicit discrimination,” said Friedler, the computer science researcher. “My opinion is that, because more decisions are being made by algorithms, that these distinctions are being blurred.”
Implicit racism seems to suggest disparity in results among groups without evidence of actual discriminatory practices -- so is the disparity due to racism or something else?
[–] Psycoth 2 points (+2|-0) ago
I'm going to guess that, for most companies, this disparity is a result of social, economic, and cultural issues that are outside of a business' control. There may be some people inside a business that have the ability to enact discriminatory practices (and I'm sure this does happen), but it would be something that would be difficult to prove.
Let's look at the Duke Power Company example used in the article: DPC wants to maximize its revenue. To do that, someone decides that the hiring process should have restrictions. Businesses do this all the time so they can easily sort through the pile of applications they get for new job openings. Everyone agrees that requiring a high school education and a specific score on a test is a good metric. Again, this is something other businesses do all the time, and there is little controversy over it.
Once this practice is in place, everything seems perfectly normal until you scrutinize the results with respect to the hiring of minorities ("people of color" in this case). As it turns out, black people in that hiring area were more likely to drop out of high school and, because of this, had a lower level of education. This was not the intention of the hiring practice, but it was the result of much larger social and economic issues.
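That kind of after-the-fact scrutiny has a standard rule of thumb behind it: the EEOC's "four-fifths rule", under which a group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. Here's a minimal sketch of that check; the applicant and hire counts are made up for illustration.

```python
# Four-fifths (80%) rule-of-thumb check for adverse impact.
# The counts below are hypothetical, not Duke Power's actual numbers.
def selection_rate(hired, applied):
    return hired / applied

def adverse_impact(rates):
    # Flag any group whose selection rate falls below 80% of the
    # most-selected group's rate.
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

rates = {
    "white": selection_rate(58, 100),  # hypothetical applicant pools
    "black": selection_rate(12, 100),
}
flags = adverse_impact(rates)
```

The point is that this check looks only at outcomes, so it catches a facially neutral requirement (like the diploma rule) that lands unevenly, regardless of anyone's intent.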
This is all guesswork, of course. It may be that someone at DPC came in one day and said, "What can I do to keep black people out of here?" Even if it was intentional, though, it still points out that social, economic, and cultural issues, which are outside any individual business's control, are being leveraged to create discriminatory hiring practices.
[–] Thememeking ago
As much as I hate that the government is sweeping up data left and right from random civilians, I can sleep easy knowing they don't have a fucking clue what to do with it. Enjoy the 80 pages of me watching Game Grumps, dummy.