Masthash

#Bias

Estelle Platini
2 hours ago

‘Mass assassination factory’

“We prepare the targets automatically and work according to a checklist,” a source who previously worked in the Target Division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

A separate source told the publication that the Habsora AI had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”. A human eye, they said, “will go over the targets before each attack, but it need not spend a lot of time on them”.

Report: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

#militaryAI #Habsora #AIRisks #IsraelDefenseForces #IDF #army #productivity #tech #techCulture #ethics @ethics #bias #AI #aiEthics #generativeAI @ai #StochasticParrots @dataGovernance @data #Gaza #war #israelGaza #israelGazaWar #israelPalestineWar #bombardment #bombs #ethnicCleansing #warCrimes

Alex Plaum
4 hours ago

"Wenn Aleph Alpha durch seine ungefilterten Nutzungsmöglichkeiten zur Debatte über die tief sitzenden Vorurteile in der Sprache im Internet aufmerksam machen wollte, ist das der Firma wohl gelungen." 😬
https://interaktiv.tagesspiegel.de/lab/aleph-alpha-ki-aus-deutschland-biases-vorurteile/

#KI #LLM #Bias

Estelle Platini
6 hours ago

In 2019, the Israeli army created a special unit to create targets with the help of generative AI. Its objective: volume, volume, volume.
The effects on civilians (harm, suffering, death) are not a priority: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

#lawful #compliance #governance #anthropology #tech #techCulture #engineering #engineers #ethics @ethics #sociology @sociology #bias #AI #AITech #aiEthics #generativeAI #chatBots @ai @psychology @socialpsych #StochasticParrots @dataGovernance @data

“This is a machine that, with the help of AI, processes a lot of data better and faster than any human, and translates it into targets for attack,” Kochavi went on. “The result was that in Operation Guardian of the Walls [in 2021], from the moment this machine was activated, it generated 100 new targets every day. You see, in the past there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day.”

“We prepare the targets automatically and work according to a checklist,” one of the sources who worked in the new Targets Administrative Division told +972 and Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”
Ricardo
1 day ago

The Internet is Worse Than Ever – Now What? | Kurzgesagt – In a Nutshell

https://www.youtube.com/watch?v=fuFlMtZmvY0

#internet #bias #SocialNetworks #community #psychology #sociology

Ben Waber
3 days ago

First was a fabulous talk by Oriana Bandiera on inequality, the misallocation of talent, and economic development at the LSE. Bandiera examines how in different settings highly educated people can be more likely to be in lower paying jobs, and the effects that economic class and gender have in ossifying these inequalities. She also presents a compelling experiment that can catapult people out of this trap. Highly recommend https://www.youtube.com/watch?v=T13d1yaMfng (2/11) #economics #inequality #bias

Adrian Segar
3 days ago

How bad smells, hand sanitizer, and Israeli judges affect your evaluation of an event. Yes, our bodies guide our judgments!

https://www.conferencesthatwork.com/index.php/event-design/2012/10/how-bad-smells-hand-sanitizer-and-israeli-judges-affect-your-evaluation-of-an-event

#meetings #evaluations #bias #psychology

Evaluation of an event—three photographs of a pile of trash bags outside a gate, a wall-mounted hand sanitizer dispenser, and a hand rapping a judge's gavel.
Ben Waber
4 days ago

Next was a thought-provoking talk by Isaac Kohane on the use and abuse of ethnic and race labeling in machine learning at Harvard CMSA https://www.youtube.com/watch?v=b2ibX026mfI (5/9) #MachineLearning #bias

Philosophical Psychology
4 days ago

We are delighted to announce that the winning paper for the Lex Academic® Essay Prize for Understanding #LinguisticDiscrimination is “Linguistic #justice in academic #philosophy” by Peter Finocchiaro & Timothy Perrine. The paper has been published online free access https://doi.org/10.1080/09515089.2023.2284243 , and will appear in print in a special issue dedicated to Understanding #Bias, guest edited by Katherine Puddifoot. @philosophy

The many languages of academia represented as a speech bubble with Earth in the background.
Jazzynupe.Xbox
5 days ago

Via #Destin talking deeper about the #XboxTax
Watch "Let's Talk About the Xbox Tax Evidence" on YouTube
https://www.youtube.com/live/khIFgTeUq6M?si=Hdu6mp2kDiLxluD7
#Xbox #FairReviews #BiasInMedia #Journalism #Bias #ConsoleWar #IGN

Greg Dance
6 days ago

Dr Russell Jackson, snr lecturer in comms & PR, Sheffield Hallam

Whose interest?

“The evidence suggests that either the desire to drive audience figures & engagement is now overly dependent upon inviting guests who specialise in generating polarising controversy, rather than engaging in nuanced respectful debate, or the people responsible for deciding which panellists are invited onto QT have a right-wing bias”.

https://yorkshirebylines.co.uk/politics/an-open-letter-to-the-bbc/

#bbcqt #bias

EngelMichelmann 👼
1 week ago

@AmiW
Early in the morning, the message took a little while to wind its way through the synapses. Brilliant!
#bias

AI6YR
1 week ago

My strategy to learn more about party planning (from the "Idiot's Guide" series) is more like running into an author with some serious bias. #books #racism #bias #hate

Suggestion in a party planning book: "After supper, gather the gang and put together a barbershop quartet or Sweet Adeline to sing classic songs of the Confederacy: 'When Johnny Comes Marching Home Again', 'Dixie', 'Mammy' or 'Swanee River'."

@mikker

Actually we never did.

They taught emselves.
Like #toddlers do.

But with all info on earth. Infinite #resources and all #bias humans are subject to.

The last part is the problematic one

Weizenbaum-Institut
1 week ago

How can #ethics be effectively implemented when working with large datasets & designing data representations?

New paper "Making Arguments with #Data" by Selena Savic & Yann Patrick Martins:

➡️ https://doi.org/10.34669/WI.WJDS/3.2.4

#digitalsovereignty #openaccess #openresearch #wjds #feminism #ai #bigdata #bias @jazoza @heimstaedt @bherlo @ecl @claraigk

ElizabethAmberLove
1 week ago

Stop firing the humans who interact with employees and potential employees! There are myriad reasons why AI does not bode well for accuracy in the real world.
From Scientific American's summary of two LLMs:
While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.” Alpaca had similar problems: men were “listeners” and “thinkers,” while women had “grace” and “beauty.” #genderbias #bias #sexism
https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/

Petra van Cronenburg
1 week ago

@tante The European media in particular often reported on the trained bias. An older interview in the Guardian with AI specialist Meredith Broussard: "Racism, sexism and ableism are systemic problems". She also talks about Sam Altman's extremely elitist and biased plans for an AI-built health system: https://www.theguardian.com/technology/2023/mar/26/artificial-intelligence-meredith-broussard-more-than-a-glitch-racism-sexism-ableism

#AI #LLM #bias

[…] intelligence was "best characterized as the things that highly educated *male* scientists found challenging", such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. "The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence."
#ai #bias

Carl
1 week ago

@echo_pbreyer Only a scoundrel would think that the intensity of a person's skin colour is directly correlated with the probability of the software classifying them as "worth checking"!
#bias #automaticdiscrimination

Ben Waber
2 weeks ago

Next was an interesting talk by Yahya Ezzeldin on privacy and fairness in collaborative AI at the Simons Institute https://www.youtube.com/watch?v=J0gVpUf9CVQ (6/8) #privacy #AI #bias

Ben Waber
2 weeks ago

Next was a fantastic talk by @saxon on analyzing cultural knowledge, biases, and multilingual performance in text-to-image models. He presents compelling ways to characterize lower bound performance and clearly illustrates the tension between providing consistent results across languages while preserving cultural differences. Saxon even shows how to intervene in the stable diffusion pipeline to switch cultural contexts. Highly recommend https://www.youtube.com/watch?v=nlu57ZSKbi0 (4/8) #AI #bias #GenerativeAI
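
A rough way to get a feel for this kind of probing, not Saxon's actual method or pipeline intervention, is simply to render the same concept from prompts written in different languages and compare what the model draws. The sketch below assumes the Hugging Face diffusers library, PyTorch, a CUDA GPU, and the publicly available runwayml/stable-diffusion-v1-5 checkpoint.

```python
# Probe cultural defaults: the same concept, prompted in different languages.
# Illustrative sketch only; assumes diffusers + torch and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = {
    "en": "a photo of a wedding",
    "de": "ein Foto von einer Hochzeit",
    "ja": "結婚式の写真",
}
for lang, prompt in prompts.items():
    # Fix the seed per call so differences come only from the prompt.
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
    image.save(f"wedding_{lang}.png")  # compare dress, setting, and participants
```

With the seed held constant, any differences between the outputs come only from the prompt, which makes the model's cultural and multilingual defaults easy to eyeball.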

Ben Waber
2 weeks ago

First was an excellent talk by Flavio Calmon on the limits of group fairness and predictive multiplicity. Calmon points out that most classification algorithms don't only give one output for a particular input, and small input or seed perturbations can lead to different outcomes. Beyond that, even different models can converge to similar unfair classifications, and Calmon develops a way to characterize this: The "Rashomon Set." Highly recommend https://www.youtube.com/watch?v=VC6LfzgXZvY (2/8) #AI #bias
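
A minimal way to see predictive multiplicity in action (a toy sketch, not Calmon's actual framework) is to train several models that differ only in their random seed, confirm that their accuracies are essentially identical, and count the test points on which they nonetheless disagree. The sketch assumes scikit-learn and NumPy.

```python
# Toy sketch of predictive multiplicity: equally accurate models, conflicting predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

preds, accs = [], []
for seed in range(10):
    # subsample < 1.0 makes the fit depend on the random seed
    clf = GradientBoostingClassifier(subsample=0.8, random_state=seed)
    clf.fit(X_train, y_train)
    preds.append(clf.predict(X_test))
    accs.append(clf.score(X_test, y_test))

preds = np.array(preds)                      # shape: (n_models, n_test_points)
conflict = (preds != preds[0]).any(axis=0)   # points where the models disagree
print(f"accuracy range across seeds: {min(accs):.3f} to {max(accs):.3f}")
print(f"test points with conflicting predictions: {conflict.mean():.1%}")
```

On toy data like this the accuracy spread is usually tiny, yet a visible fraction of individuals receive conflicting labels, which is the tension the Rashomon Set is meant to characterize.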

Ben Waber
2 weeks ago

Next was a great talk by Sanghamitra Dutta on fair and explainable machine learning for high-stakes applications at the Simons Institute. Dutta introduces a method for generating counterfactuals when an algorithm makes a classification decision, helping people who, say, don't qualify for a loan understand what they'd have to do to be able to qualify while explicitly designing for fairness. Highly recommend https://www.youtube.com/watch?v=mNqeIe85pf8 (6/10) #AI #bias #AIEthics
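
To make the idea concrete, here is a naive counterfactual search on a toy "loan" model (an illustrative greedy nudge, not Dutta's fairness-aware method), assuming scikit-learn and NumPy.

```python
# Naive counterfactual search on a toy logistic-regression "loan" model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # toy features, e.g. income, debt, tenure
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.1, max_steps=200):
    """Greedily move the most influential feature toward approval until the label flips."""
    x = x.copy()
    coefs = model.coef_[0]
    j = int(np.argmax(np.abs(coefs)))            # feature with the largest effect
    for _ in range(max_steps):
        if model.predict(x.reshape(1, -1))[0] == 1:
            return x                             # approved: this is the counterfactual
        x[j] += step * np.sign(coefs[j])         # push that feature toward approval
    return None

rejected = X[model.predict(X) == 0][0]
cf = counterfactual(rejected, model)
print("rejected applicant:", np.round(rejected, 2))
print("counterfactual:    ", np.round(cf, 2) if cf is not None else "not found")
```

A real system would also constrain which features may change and by how much, which is part of why explicitly designing for fairness matters here.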

T. T. Perry
2 weeks ago

What are the best things you know of written about the view from nowhere?

#AskFedi #journalism #curious #bias

A study of cultural evolution based on a massive dataset of 3.5 million #chess games from 1971 to 2019. As expected, some moves evolved under negative frequency dependence (anticonformity, advantage of rare traits). But some others evolved under pure prestige bias, when players mimicked elite players.

#Science #evolution #culture #conformity #prestige #strategy #bias #CognitiveBias

📄 Lappo et al (2023) Cultural transmission of move choice in chess. Proc R Soc B 290:20231634 http://dx.doi.org/10.1098/rspb.2023.1634

Chess diagram illustrating the Najdorf Sicilian opening and a plot showing its move frequencies over time from 1971 to 2019.
Ben Waber
2 weeks ago

Next was an amazing talk by @mariadearteaga on residual harms of fairness-aware algorithms at the Simons Institute. Algorithms can learn, replicate, and ossify societal biases, and this work aims to quantify the risk of inferred norms and illustrate the limits of post-processing corrections. Highly recommend https://www.youtube.com/watch?v=g0BSE8qEub8 (7/10) #bias #AI #AIEthics

Dimitri Dhuyvetter
2 weeks ago

Interestingly, the Council on Foreign Relations, which the first article came from, supposedly has a left-wing bias: https://www.allsides.com/news-source/council-foreign-relations although they’re also rated “least biased” according to https://mediabiasfactcheck.com/council-on-foreign-relations/ #CouncilOnForeignRelations #bias #mediabias

Ben Waber
2 weeks ago

Next was a sweeping talk by Tessa Charlesworth on long-term change in attitudes and stereotypes using the IAT, surveys, and text analysis at NICO https://www.youtube.com/watch?v=EHEiLTW8BS4 (9/13) #bias #psychology

Frederik Borgesius
2 weeks ago

If, like most people, you don’t speak Dutch: here is a blog post in English in which we make similar points:

Using sensitive data to prevent AI-driven discrimination: Does the EU GDPR need a new exception?

https://iapp.org/news/a/using-sensitive-data-to-prevent-ai-discrimination-does-the-eu-gdpr-need-a-new-exception/
#AI #FAccT #bias #discrimination #tech #gdpr #privacy #data #dataprotection #datascience #machinelearning #aiact #law

Frederik Borgesius
2 weeks ago

New blog post by Marvin van Bekkum & me:

Are you allowed to process sensitive data to counter discrimination by AI, and why (not)?

https://ibestuur.nl/artikel/gevoelige-gegevens-verwerken-om-discriminatie-door-ai-tegen-te-gaan/

#AI #FAccT #bias #discrimination #tech #gdpr #privacy #data #dataprotection #datascience #machinelearning #aiact #avg

Andrew
2 weeks ago

Back from three days at the #apha @publichealth @aphagovuk #publichealth meeting in #Atlanta. A chance ride on the elevator led to a conversation with a more senior colleague, completely changing my opinion about looking for social determinants of health and #bias in predicting outcomes with #machinelearning. Thanks to Oxford University Press for not wanting to take all their books home and putting a few off on me. I realize we need more #health #communications too

Flipboard News Desk
2 weeks ago

Biden administration bolsters effort to battle antisemitism and Islamophobia in U.S. schools.

NBC News reports the effort is centered on "all school levels, from elementary to college."

https://flip.it/R8FDRm

#Antisemitism #Islamophobia #Bias #Education #Biden #Israel #Hamas

EU_ScienceHub
2 weeks ago

RT by @EU_ScienceHub: Important messages from @AnnaLuehrmann at the #ESPAS 2023 opening panel:

🔹we have to ensure that #foresight translates across into policy action
🔹we need to be more #agile in our responses
🔹foresight can help overcome the risk of #present thinking #bias

🐦🔗: https://nitter.cz/stephen_quest/status/1724349292167446805#m

[2023-11-14 08:51 UTC]

DennisL
3 weeks ago

Disability Bias in Artificial Intelligence - #id24 2023, with Chris Land https://www.youtube.com/watch?v=n2z4V5gltJA #AI #disability #bias

NYT covering Hillary Clinton’s “deplorables” remark vs NYT covering Donald Trump’s “vermin” remark.

#politics #trump #election #news #media #theNewYorkTimes #newspaper #hillaryClinton #mediaBias #bias

Screenshot of The New York Times

Hillary Clinton Calls Many Trump Backers ‘Deplorables,’ and G.O.P. Pounces
Screenshot of The New York Times

Trump Takes Veterans Day Speech in a Very Different Direction
Ben Waber
3 weeks ago

Next was a fabulous talk by @morganklauss on how we teach computers to see identity at the University of Michigan. Scheuerman audits gender classification across a number of commercial models, demonstrating significant bias and erasure of non-cis people. Highly recommend https://www.youtube.com/watch?v=aeETasFrnMs (6/11) #gender #AIEthics #bias

RationalWiki
3 weeks ago

@AlexanderKingsbury @zdl

Various political compasses have built-in biases, and the one that you seem to be citing, the one with the libertarian-authoritarian axis, often has loaded questions associated with it.

#RationalWiki #politicalcompass #bias #libertarianism #authoritarianism
https://rationalwiki.org/wiki/Political_Compass

petersuber
4 weeks ago

Update. New study using #ChatGPT to assess referee reports: "Female first authors received less polite reviews than their male peers… In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author."
https://elifesciences.org/articles/90230

#AI #Bias #Gender #PeerReview

Rafagas Links
1 month ago

STNAMES LAB (Street names Lab) is a research group focused on the quantitative analysis of urban toponyms, in this case gender bias #bias

https://stnameslab.com/ciudades-con-genero/#evo

Dr. Steve Thompson
1 month ago

Looks like Republican right wing media outlets, including Murdoch's holdings, aren't feeding their loyal followers this story right now. More deliberate censoring of anti-Trump national news from GOP party members.

What indictments...?!?

"Special counsel accuses Trump legal team of trying to manipulate courts with requests to delay trials"

https://ground.news/article/special-counsel-says-trump-is-trying-to-manipulate-courts

#GOP #Trump #indictments #Cannon #RICO #journalism #bias #censorship

-0--1-
1 month ago

@feoh @Sentry23 @mohammadrasoul I think what you are experiencing are the three sources of #Embedded #Structural #Bias that backpropagation amplifies: (1) Biases in the Training Set, (2) Biases in the Trainers and Coders, (3) Biases in the Individuals who monitor Bard's communication stream, in addition to the #Hallucinations that are common.

PattyHankins
1 month ago

Article from the Washington Post about how AI-generated images reinforce biases and stereotypes through what they portray.

https://wapo.st/3u429lR

Link is a gift link so you don’t need a Washington Post account to read the article

#AI #Stereotypes #Bias

Upol Ehsan
1 month ago

🧵 [1/n]
Super interesting study on an instance of the Algorithmic Imprint (https://twitter.com/UpolEhsan/status/1537112310505824256)-- people might retain biases from AI systems even when the AI system is no longer there. The spirit of the argument is well taken. What are some of the caveats you should pay attention to when trying to transfer insights from studies like these to real-world settings?

#AI #mastodon #responsibleAI #ML #bias

About 90% of those sentenced to prison for a drug offense in Illinois are African American. White drug users and dealers are rarely arrested, and when they are, they are treated more favorably at every stage of the criminal justice process, including plea-bargaining and sentencing. Whites are consistently more likely to avoid prison and felony charges, even when they are repeatedly caught with drugs.

–The New Jim Crow

#Books #Quotes #Drugs #Injustice #Illinois #Prison #MassIncarceration #Bias

John Leonard
1 month ago

The social engineering of the self: How AI chatbots manipulate our thinking

My interview with red teamer Rumman Chowdhury (free registration)

https://www.computing.co.uk/interview/4139201/social-engineering-self-ai-chatbots-manipulate-thinking

#ai #technews #infosec #redteam #chatbot #bias

Wladimir Mufty
1 month ago

Technology is not neutral.

Not yesterday, not today and not tomorrow.

Who created it? Why was it created? Who can use it? Who is benefiting? Who is getting harmed? What is the potential? Who was included in decisions on all the above?!

#palantir #ai #bias

AndiMann
1 month ago

btw, I have new analysis coming soon on this via @Sageable (is me!!), covering @Hashicorp, #Terraform, #OpenTofu, ++ .

I was hosted at #HashiConf, but try to break my own #bias & base opinions mainly on feedback from end users & customers.

Stay tuned, coming soon!!!

For Genderleicht.de I have written an essay on "Unerwünschte Sexyness: Was KI-Bilder-Apps mit Frauen machen" (Unwanted sexiness: what AI image apps do to women).

It is about #KI and #Bias
https://www.genderleicht.de/unerwuenschte-sexyness-was-ki-bilder-apps-mit-frauen-machen/

ICYMI @ieeevis "Average Estimates in Line Graphs are Biased Towards Areas of Higher Variability" you can view a recording of the talk here: https://youtu.be/XTA-DNCITtA and read more here: https://vis.khoury.northeastern.edu/pubs/Moritz2023AverageEstimatesLine/

#DataVis #Visualization #Bias #Estimation #statistics #scicomm #communication #psychology

Image of three time series charts. The first encodes data with points equally spaced along the x-axis, the second uses a line, and the third uses points equally spaced along the arc of the line. On each chart, there is a grey overlaid line indicating the true average of the data and a red line showing participants' estimated average.
Demonstration of the bias toward variability for three mark types showing the same data. The red line shows the mean estimated averages across all participants in our second experiment. The line chart shows a bias of the estimated average toward higher variability in the higher y-values.
Demonstration of the bias toward variability for three mark types showing the same data. The red line shows the mean estimated averages across all participants in our second experiment. The bias is smallest when the data is shown as points equally spaced along the x-axis.
Demonstration of the bias toward variability for three mark types showing the same data. The red line shows the mean estimated averages across all participants in our second experiment.  The bias in line charts is in the same direction as the bias of estimates of points sampled at equal intervals along the arc of the line.

For example, a review of juvenile sentencing reports found that prosecutors routinely described black and white youth differently: blacks committed crimes because of internal personality flaws, such as disrespect; whites did so because of external conditions, such as family conflict.

–The New Jim Crow

#Racism #Law #Prosecutors #Youth #Crime #Bias

Ohio Rob 🐶📻
1 month ago

Here we have CNN exaggerating, literally by MILLIONS of people, the scope of how many Palestinian civilians could be displaced by Israeli military action in Gaza.

The entire population of Gaza is 2 million.
If a million people have already fled, as she states, then there are not *millionS* more who will become displaced. 🚨

At least Reuters can do the math. Stay safe in this information space, folks.

#israel #gaza #terrorists #news #bias

Excerpt from a CNN story that inaccurately portrays the scope of Palestinian civilians displaced by Israeli military action in Gaza. The author suggested that after 1 million Palestinians have already fled, millions more could be displaced. But the population of Gaza is only 2 million.
Screen capture of a Reuters headline from eight days ago, which more accurately depicts the number of Palestinians displaced by Israeli military activity in Gaza. It notes that Israel had warned half of Gaza’s population, which is accurate.

We’re excited for Wednesday morning’s @ieeevis #Trust & #Bias full paper session (10:45-12, Rm 109). Best paper honorable mention “Average Estimates in Line Graphs are Biased Towards Areas of Higher Variability” proudly co-authored by @lace : https://arxiv.org/abs/2308.03903 #IEEEVIS

Longreads
1 month ago

“Now we’re giving a voice to machines.”

For Rest of World, Victoria Turk explores bias in AI image generators like #Midjourney: https://restofworld.org/2023/ai-image-stereotypes/

#Longreads #AI #ArtificialIntelligence #Stereotypes #Bias #Culture

Sarah de Rijcke
2 months ago

Does science need heroes or does it need to reform? I elaborate on this in a blog post, based on a talk I gave at the recent Rijksmuseum Boerhaave symposium on Prize Cultures. Idolizing heroes can worsen #bias, #inequality, and #competition in science. Yet, it does require good #leadership to ignite structural change. https://www.leidenmadtrics.nl/articles/does-science-need-heroes

Ben Waber
2 months ago

Next was a great set of talks on entrepreneurship at the Strategic Management Society with Alexis Smith Washington (skin tone bias in microlending), Peter S. Polhill (Romani entrepreneurship), and Taekyun Kim (impact of COVID-19 on entrepreneurship) https://www.youtube.com/watch?v=sd3d-jad3aw (6/9) #startups #bias #economics

Frederik Borgesius
2 months ago

‘The Belastingdienst/Toeslagen (Tax and Benefits Administration) is guilty of indirect discrimination in the childcare benefits affair. This emerges from the first 3 cases of so-called ‘toeslagenouders’ (benefits parents) that the College voor de Rechten van de Mens has investigated and tested against the Algemene Wet Gelijke Behandeling (General Equal Treatment Act). The verdict, three times over: prohibited indirect distinction on the grounds of race.’
https://www.mensenrechten.nl/actueel/nieuws/2023/10/2/toeslagen-oordelen #discrimination #bias #law #politics #netherlands

Frederik Borgesius
2 months ago

Netherlands Institute for Human Rights, the non-discrimination authority: The Tax Authority is guilty of indirect discrimination in the childcare benefits affair. This emerges from the first three cases of parents investigated by the Institute and tested against the General Equal Treatment Act. The verdict in all three cases: prohibited indirect discrimination on grounds of ethnicity. https://www.mensenrechten.nl/actueel/nieuws/2023/10/2/toeslagen-oordelen #discrimination #bias #law #politics #netherlands

Research Institute
2 months ago

EEOC Settles First-of-Its-Kind AI #Bias in Hiring Lawsuit

"The US Equal Employment Opportunity Commission settled its first-ever #AI #discrimination in hiring lawsuit, reaching an agreement with a tutoring company that allegedly programmed its recruitment software to automatically reject older applicants."

https://news.bloomberglaw.com/daily-labor-report/eeoc-settles-first-of-its-kind-ai-bias-lawsuit-for-365-000

Mike Gifford
2 months ago

It was very interesting to listen to Amy Hupe's Systems of Harm, especially her interview with David Dylan Thomas. I liked the idea that we can't remove bias from our thinking, but we can increase our awareness of it. https://systems-of-harm.fireside.fm/ep1-david-dylan-thomas

#DesignSystem #Bias #CognitiveBias

Theodora Lau
2 months ago

"As larger women age, they incur the effects of years of cumulative wage discrimination. Controlling for other factors, their starting wages are lower. Throughout their working careers, these women receive fewer raises and promotions."

#bias

https://www.economist.com/christmas-specials/2022/12/20/the-economics-of-thinness

Alan Kotok
3 months ago

I.T. systems company Fujitsu Ltd. is providing artificial intelligence model generation and fairness assessment software as open-source projects hosted by the Linux Foundation.

https://sciencebusiness.technewslit.com/?p=45211

#News #Press #Science #Business #ArtificialIntelligence #MachineLearning #Models #Algorithm #OpenSource #Software #Collaboration #Bias #Fairness #Engineering #ComputerScience #Linux #Japan #Europe

CJ Stevens - Metaphysiology
3 months ago

#Myside #bias is me google searching "What is the youngest recommended age for reading the Lord of the Rings to a child?" and stopping my search only when I've found a page that says 4 years old. Even if I have to make that page myself! 🙃

#Tolkien #LOTR #motivatedreasoning

@cognition

Mastodon, I'm bringing this one to you first.

Earlier this week the New York Times claimed to have partially corroborated some of Jamie Reed's allegations against a clinic for transgender youth in St. Louis.

Their story did not back up the assertion that they corroborated Reed's claims. I know that because I looked at every single one of the 69 claims. #trans #transgender #news #journalism #Newstodon #bias

https://www.assignedmedia.org/breaking-news/jamie-reeds-allegations-are-not-even-partially-confirmed

Mike Gifford
3 months ago

Christopher Land's Disability Bias in Artificial Intelligence talk at #ID24 looks great.

Learn about bias in artificial intelligence which can discriminate against marginalized groups. When built accessibly, AI systems can promote fairness and enhance the lives of everyone, in amazing and unprecedented ways.

#accessibility #AI #bias

1pm ET September 21st https://inclusivedesign24.org/2023/schedule/

JW prince of CPH
3 months ago

Sometimes I think about the time I heard a Danish right-wing politician flat-out state, unprompted, in a speech inside f**king parliament, that they were using anti-immigration as a platform for building the influence needed to relieve taxation & regulation on the rich - and not only did no mainstream news channel ever say a single word about it, but neither, to my knowledge, did any other politician, including her opposition...

#politics #press #bias

Ben Waber
3 months ago

First was a great talk by Timothy Pleskac on equity and fairness of single vs. double-blind review at @sfiscience. There's important interrogation here of #academia, KPIs, #bias, and the benefits of some randomness in rewards. Highly recommend https://www.youtube.com/watch?v=E9cy5a8zmYc (2/7)

Your Autistic Life
4 months ago

"OpenAI is using GPT-4 to build an AI-powered content moderation system"

I doubt anything good is going to come out of this. It has already been demonstrated that AI systems are biased in how they handle cases *if* the datasets they are trained on are biased in the first place.

I can only see moderators, where this is implemented, shrug and say "I cannot be biased, because the black box told me to do it, and the black box is not biased."

#AI #AIIsShit #moderation #AutomatedModeration #bias

https://www.engadget.com/openai-is-using-gpt-4-to-build-an-ai-powered-content-moderation-system-184933519.html

ThePdog
4 months ago

@mpesce Seems like there may be some bias built into the software. Not surprising this would happen, especially if the software had no poc involved.

#Software #bias #poc

Chris Lagasse
4 months ago

Why must the BBC redirect me from their `.co.uk/news` site to bbc.com?

I want #international #news with non-American #bias, not news provided by an international source geared towards an American audience.