# Bias
‘Mass assassination factory’
“We prepare the targets automatically and work according to a checklist,” a source who previously worked in the Target Division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”
A separate source told the publication that the Habsora AI had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”. A human eye, they said, “will go over the targets before each attack, but it need not spend a lot of time on them”.
Report: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/
#militaryAI #Habsora #AIRisks #IsraelDefenseForces #IDF #army #productivity #tech #techCulture #ethics @ethics #bias #AI #aiEthics #generativeAI @ai #StochasticParrots @dataGovernance @data #Gaza #war #israelGaza #israelGazaWar #israelPalestineWar #bombardment #bombs #ethnicCleansing #warCrimes
"Wenn Aleph Alpha durch seine ungefilterten Nutzungsmöglichkeiten zur Debatte über die tief sitzenden Vorurteile in der Sprache im Internet aufmerksam machen wollte, ist das der Firma wohl gelungen." 😬
https://interaktiv.tagesspiegel.de/lab/aleph-alpha-ki-aus-deutschland-biases-vorurteile/
In 2019, the Israeli army created a special unit to create targets with the help of generative AI. Its objective: volume, volume, volume.
The effects on civilians (harm, suffering, death) are not a priority: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/
#lawful #compliance #governance #anthropology #tech #techCulture #engineering #engineers #ethics @ethics #sociology @sociology #bias #AI #AITech #aiEthics #generativeAI #chatBots @ai @psychology @socialpsych #StochasticParrots @dataGovernance @data
![“This is a machine that, with the help of AI, processes a lot of data better and faster than any human, and translates it into targets for attack,” Kochavi went on. “The result was that in Operation Guardian of the Walls [in 2021], from the moment this machine was activated, it generated 100 new targets every day. You see, in the past there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day.”
“We prepare the targets automatically and work according to a checklist,” one of the sources who worked in the new Targets Administrative Division told +972 and Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”](https://assets.toot.cafe/cache/media_attachments/files/111/505/348/136/664/824/small/e048e0ac25386bed.png)
The Internet is Worse Than Ever – Now What? | Kurzgesagt – In a Nutshell
https://www.youtube.com/watch?v=fuFlMtZmvY0
#internet #bias #SocialNetworks #community #psychology #sociology
Memory of a Braid
Winner of the 2023 Ross Writing Contest for Legal Short Fiction
#humanrights #discrimination #diversity #inclusion #bias #law #legal #lawfedi #labor #employment
First was a fabulous talk by Oriana Bandiera on inequality, the misallocation of talent, and economic development at the LSE. Bandiera examines how in different settings highly educated people can be more likely to be in lower paying jobs, and the effects that economic class and gender have in ossifying these inequalities. She also presents a compelling experiment that can catapult people out of this trap. Highly recommend https://www.youtube.com/watch?v=T13d1yaMfng (2/11) #economics #inequality #bias
Racism and Technology Center, newsletter
https://racismandtechnology.center/tag/newsletter-38/
#tech #ai #bias #discrimination #racism #facct #law #politics
How bad smells, hand sanitizer, and Israeli judges affect your evaluation of an event. Yes, our bodies guide our judgments!

Next was a thought-provoking talk by Isaac Kohane on the use and abuse of ethnic and race labeling in machine learning at Harvard CMSA https://www.youtube.com/watch?v=b2ibX026mfI (5/9) #MachineLearning #bias
We are delighted to announce that the winning paper for the Lex Academic® Essay Prize for Understanding #LinguisticDiscrimination is “Linguistic #justice in academic #philosophy” by Peter Finocchiaro & Timothy Perrine. The paper has been published online free access https://doi.org/10.1080/09515089.2023.2284243 , and will appear in print in a special issue dedicated to Understanding #Bias, guest edited by Katherine Puddifoot. @philosophy

Via #Destin, who goes deeper into the #XboxTax
Watch "Let's Talk About the Xbox Tax Evidence" on YouTube
https://www.youtube.com/live/khIFgTeUq6M?si=Hdu6mp2kDiLxluD7
#Xbox #FairReviews #BiasInMedia #Journalism #Bias #ConsoleWar #IGN
Dr Russell Jackson, snr lecturer in comms & PR, Sheffield Hallam
Whose interest?
“The evidence suggests that either the desire to drive audience figures & engagement is now overly dependent upon inviting guests who specialise in generating polarising controversy, rather than engaging in nuanced respectful debate, or the people responsible for deciding which panellists are invited onto QT have a right-wing bias”.
https://yorkshirebylines.co.uk/politics/an-open-letter-to-the-bbc/

Actually we never did.
They taught themselves.
Like #toddlers do.
But with all the info on earth. Infinite #resources and all the #bias humans are subject to.
The last part is the problematic one.
How can #ethics be effectively implemented when working with large datasets & designing data representations?
New paper "Making Arguments with #Data" by Selena Savic & Yann Patrick Martins:
➡️ https://doi.org/10.34669/WI.WJDS/3.2.4
#digitalsovereignty #openaccess #openresearch #wjds #feminism #ai #bigdata #bias @jazoza @heimstaedt @bherlo @ecl @claraigk


Stop firing humans and having AI interact with employees and potential employees instead! There are myriad reasons why AI does not bode well for accuracy in the real world.
From Scientific American's summary of two LLMs:
While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.” Alpaca had similar problems: men were “listeners” and “thinkers,” while women had “grace” and “beauty.” #genderbias #bias #sexism
https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/
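As a rough sketch of the kind of audit behind such findings (not the Scientific American study's methodology), one can tally agentic versus appearance-oriented descriptors across batches of model-generated letters; the word lists and sample letters below are illustrative assumptions.

```python
from collections import Counter
import re

# Hypothetical descriptor lists, loosely echoing the words quoted above;
# a real audit would use a validated lexicon and many generated letters.
AGENTIC = {"expert", "integrity", "listener", "thinker"}
APPEARANCE = {"beauty", "delight", "grace"}

def descriptor_counts(letters):
    """Tally agentic vs. appearance-oriented descriptors across a batch of letters."""
    counts = Counter(agentic=0, appearance=0)
    for text in letters:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts["agentic"] += sum(t in AGENTIC for t in tokens)
        counts["appearance"] += sum(t in APPEARANCE for t in tokens)
    return counts

# Compare batches of generated letters for male vs. female subjects.
letters_for_men = ["He is an expert of great integrity and a careful thinker."]
letters_for_women = ["She is a delight, bringing grace and beauty to the team."]
print(descriptor_counts(letters_for_men))    # Counter({'agentic': 3, 'appearance': 0})
print(descriptor_counts(letters_for_women))  # Counter({'appearance': 3, 'agentic': 0})
```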
@tante The European media in particular often reported on the trained bias. An older interview in the Guardian with AI specialist Meredith Broussard: "Racism, sexism and ableism are systemic problems". She also talks about Sam Altman's extremely elitist and biased plans for an AI-built health system: https://www.theguardian.com/technology/2023/mar/26/artificial-intelligence-meredith-broussard-more-than-a-glitch-racism-sexism-ableism
…intelligence was "best characterized as the things that highly educated *male* scientists found challenging", such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. "The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence."
#ai #bias
@echo_pbreyer Only a rogue would think that the intensity of a person's skin colour correlates directly with the probability of the software classifying them as "worth checking"!
#bias #automaticdiscrimination
Next was an interesting talk by Yahya Ezzeldin on privacy and fairness in collaborative AI at the Simons Institute https://www.youtube.com/watch?v=J0gVpUf9CVQ (6/8) #privacy #AI #bias
Next was a fantastic talk by @saxon on analyzing cultural knowledge, biases, and multilingual performance in text-to-image models. He presents compelling ways to characterize lower bound performance and clearly illustrates the tension between providing consistent results across languages while preserving cultural differences. Saxon even shows how to intervene in the stable diffusion pipeline to switch cultural contexts. Highly recommend https://www.youtube.com/watch?v=nlu57ZSKbi0 (4/8) #AI #bias #GenerativeAI
First was an excellent talk by Flavio Calmon on the limits of group fairness and predictive multiplicity. Calmon points out that most classification algorithms don't only give one output for a particular input, and small input or seed perturbations can lead to different outcomes. Beyond that, even different models can converge to similar unfair classifications, and Calmon develops a way to characterize this: The "Rashomon Set." Highly recommend https://www.youtube.com/watch?v=VC6LfzgXZvY (2/8) #AI #bias
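A minimal sketch of this predictive-multiplicity idea (not Calmon's Rashomon-set machinery), assuming scikit-learn and synthetic data: fit the same pipeline under different seeds and count how often the resulting, similarly accurate models disagree on individual test points.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for a real dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Same pipeline, different seeds: a crude stand-in for a set of near-equivalent models.
preds = []
for seed in range(10):
    clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_tr, y_tr)
    preds.append(clf.predict(X_te))

preds = np.array(preds)                               # shape: (n_models, n_test)
conflicted = preds.min(axis=0) != preds.max(axis=0)   # points where the models disagree
print(f"Test points with conflicting predictions: {conflicted.mean():.1%}")
```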
Next was a great talk by Sanghamitra Dutta on fair and explainable machine learning for high-stakes applications at the Simons Institute. Dutta introduces a method for generating counterfactuals when an algorithm makes a classification decision, helping people who, say, don't qualify for a loan understand what they'd have to do to be able to qualify while explicitly designing for fairness. Highly recommend https://www.youtube.com/watch?v=mNqeIe85pf8 (6/10) #AI #bias #AIEthics
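A toy sketch of the counterfactual idea, not the fairness-aware method from the talk; the loan features, synthetic data, and greedy search along the model's weight direction are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan data: features are [income_k, credit_years], label is "approved".
rng = np.random.default_rng(0)
X = rng.normal([50, 5], [15, 3], size=(500, 2))
y = (X[:, 0] + 5 * X[:, 1] + rng.normal(0, 10, 500) > 80).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, step=1.0, max_iter=200):
    """Greedily move x along the model's weight direction until it is approved."""
    x_cf = x.copy()
    direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
    for _ in range(max_iter):
        if model.predict([x_cf])[0] == 1:
            return x_cf
        x_cf = x_cf + step * direction
    return None

applicant = np.array([35.0, 2.0])          # a rejected applicant
cf = counterfactual(applicant)
if cf is not None:
    print("Change needed to qualify:", np.round(cf - applicant, 2))
```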
What are the best things you know of written about the view from nowhere?
A study of cultural evolution based on a massive dataset of 3.5 million #chess games from 1971 to 2019. As expected, some moves evolved under negative frequency dependence (anticonformity, advantage of rare traits). But others evolved under pure prestige bias, with players mimicking elite players.
#Science #evolution #culture #conformity #prestige #strategy #bias #CognitiveBias
📄 Lappo et al (2023) Cultural transmission of move choice in chess. Proc R Soc B 290:20231634 http://dx.doi.org/10.1098/rspb.2023.1634

Next was an amazing talk by @mariadearteaga on residual harms of fairness-aware algorithms at the Simons Institute. Algorithms can learn, replicate, and ossify societal biases, and this work aims to quantify the risk of inferred norms and illustrate the limits of post-processing corrections. Highly recommend https://www.youtube.com/watch?v=g0BSE8qEub8 (7/10) #bias #AI #AIEthics
Interestingly, the Council on Foreign Relations, which the first article was from, supposedly has a left-wing bias: https://www.allsides.com/news-source/council-foreign-relations although it is also rated "least biased" by https://mediabiasfactcheck.com/council-on-foreign-relations/ #CouncilOnForeignRelations #bias #mediabias
Next was a sweeping talk by Tessa Charlesworth on long-term change in attitudes and stereotypes using the IAT, surveys, and text analysis at NICO https://www.youtube.com/watch?v=EHEiLTW8BS4 (9/13) #bias #psychology
"How Ad Dollars, And Some AI, Might Restore Our Shared Truth — With Vanessa Otero"
https://www.listennotes.com/podcasts/big-technology/how-ad-dollars-and-some-ai-7mSGpNK6w_d/
I wrote a follow-up article to my previous one, which is about (what appears to be) #OperatingSystem #bias in #NextGenerationInternet (#NGI) and #NLnet:
https://sleepmap.de/2023/operating-system-bias-in-next-generation-internet-and-nlnet/
#ArchLinux #EuropeanCommission #Linux #Nix #NixOS #NixOSFoundation #funding
If, like most people, you don’t speak Dutch: here is a blog post in English in which we make similar points:
Using sensitive data to prevent AI-driven discrimination: Does the EU GDPR need a new exception?
https://iapp.org/news/a/using-sensitive-data-to-prevent-ai-discrimination-does-the-eu-gdpr-need-a-new-exception/
#AI #FAccT #bias #discrimination #tech #gdpr #privacy #data #dataprotection #datascience #machinelearning #aiact #law
New blog post by Marvin van Bekkum & me:
Are you allowed to process sensitive data to counter discrimination by AI, and why (not)?
https://ibestuur.nl/artikel/gevoelige-gegevens-verwerken-om-discriminatie-door-ai-tegen-te-gaan/
#AI #FAccT #bias #discrimination #tech #gdpr #privacy #data #dataprotection #datascience #machinelearning #aiact #avg
Back from three days at the #apha @publichealth @aphagovuk #publichealth meeting in #Atlanta. A chance ride on the elevator led to a conversation with a more senior colleague, completely changing my opinion about looking for social determinants of health and #bias in predicting outcomes with #machinelearning. Thanks to Oxford University Press for not wanting to take all their books home and putting a few off on me. I realize we need more #health #communications too.
Biden administration bolsters effort to battle antisemitism and Islamophobia in U.S. schools.
NBC News reports the effort is centered on "all school levels, from elementary to college."
#Antisemitism #Islamophobia #Bias #Education #Biden #Israel #Hamas
RT by @EU_ScienceHub: Important messages from @AnnaLuehrmann at the #ESPAS 2023 opening panel:
🔹we have to ensure that #foresight translates across into policy action
🔹we need to be more #agile in our responses
🔹foresight can help overcome the risk of #present thinking #bias
🐦🔗: https://nitter.cz/stephen_quest/status/1724349292167446805#m
[2023-11-14 08:51 UTC]

New Norwegian Guide to Prevent AI Discrimination Launched With Minister of Digitalisation: https://medium.com/ethical-ai-resources/new-norwegian-guide-to-prevent-ai-discrimination-launched-with-minister-of-digitalisation-6806cc9d0b65 #skolechat #equity #lighed #bias #aiudd #ai
Disability Bias in Artificial Intelligence - #id24 2023, with Chris Land https://www.youtube.com/watch?v=n2z4V5gltJA #AI #disability #bias
NYT covering Hillary Clinton’s “deplorables” remark vs NYT covering Donald Trump’s “vermin” remark.
#politics #trump #election #news #media #theNewYorkTimes #newspaper #hillaryClinton #mediaBias #bias
Next was a fabulous talk by @morganklauss on how we teach computers to see identity at the University of Michigan. Scheuerman audits gender classification across a number of commercial models, demonstrating significant bias and erasure of non-cis people. Highly recommend https://www.youtube.com/watch?v=aeETasFrnMs (6/11) #gender #AIEthics #bias
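For what a per-group audit computes, here is a minimal sketch (not Scheuerman's study design); the toy records are assumptions. The point is that accuracy broken out by self-identified group also makes erasure visible, since a binary label set can never be correct for non-binary people.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (self_identified_group, predicted_label, true_label)."""
    stats = defaultdict(lambda: [0, 0])          # group -> [correct, total]
    for group, pred, true in records:
        stats[group][0] += int(pred == true)
        stats[group][1] += 1
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical toy data: a binary gender classifier applied to self-identified labels.
records = [
    ("woman", "woman", "woman"), ("woman", "man", "woman"),
    ("man", "man", "man"), ("non-binary", "woman", "non-binary"),
]
print(per_group_accuracy(records))  # {'woman': 0.5, 'man': 1.0, 'non-binary': 0.0}
```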
Various political compasses have built in biases, and the one that you seem to be citing, the one with the libertarian-authoritarian axis, often has loaded questions associated with it.
#RationalWiki #politicalcompass #bias #libertarianism #authoritarianism
https://rationalwiki.org/wiki/Political_Compass
Update. New study using #ChatGPT to assess referee reports: "Female first authors received less polite reviews than their male peers… In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author."
https://elifesciences.org/articles/90230
STNAMES LAB (Street Names Lab) is a research group focused on the quantitative analysis of urban toponyms, in this case gender bias. #bias
Looks like Republican right wing media outlets, including Murdoch's holdings, aren't feeding their loyal followers this story right now. More deliberate censoring of anti-Trump national news from GOP party members.
What indictments...?!?
"Special counsel accuses Trump legal team of trying to manipulate courts with requests to delay trials"
:https://ground.news/article/special-counsel-says-trump-is-trying-to-manipulate-courts
#GOP #Trump #indictments #Cannon #RICO #journalism #bias #censorship
@feoh @Sentry23 @mohammadrasoul I think what you are experiencing are the three sources of #Embedded #Structural #Bias that backpropagation amplifies: (1) Biases in the Training Set, (2) Biases in the Trainers and Coders, and (3) Biases in the Individuals who monitor Bard's communication stream, in addition to the #Hallucinations that are common.
Article from the Washington Post about how AI-generated images are reinforcing biases and stereotypes in what they portray.
Link is a gift link so you don’t need a Washington Post account to read the article
🧵 [1/n]
Super interesting study on an instance of the Algorithmic Imprint (https://twitter.com/UpolEhsan/status/1537112310505824256)-- people might retain biases from AI systems even when the AI system is no longer there. The spirit of the argument is well taken. What are some of the caveats you should pay attention to when trying to transfer insights from studies like these to real-world settings?
About 90% of those sentenced to prison for a drug offense in Illinois are African-American. White drug users and dealers are rarely arrested, and when they are, they are treated more favorably at every stage of the criminal justice process, including plea-bargaining and sentencing. Whites are consistently more likely to avoid prison and felony charges, even when they are repeatedly caught with drugs.
–The New Jim Crow
#Books #Quotes #Drugs #Injustice #Illinois #Prison #MassIncarceration #Bias
btw, I have new analysis coming soon on this via @Sageable (is me!!), covering @Hashicorp, #Terraform, #OpenTofu, ++ .
I was hosted at #HashiConf, but try to break my own #bias & base opinions mainly on feedback from end users & customers.
Stay tuned, coming soon!!!
For Genderleicht.de I have written an essay on "Unerwünschte Sexyness: Was KI-Bilder-Apps mit Frauen machen" (Unwanted sexiness: what AI image apps do to women).
It's about #KI and #Bias
https://www.genderleicht.de/unerwuenschte-sexyness-was-ki-bilder-apps-mit-frauen-machen/
ICYMI @ieeevis "Average Estimates in Line Graphs are Biased Towards Areas of Higher Variability" you can view a recording of the talk here: https://youtu.be/XTA-DNCITtA and read more here: https://vis.khoury.northeastern.edu/pubs/Moritz2023AverageEstimatesLine/
#DataVis #Visualization #Bias #Estimation #statistics #scicomm #communication #psychology
For example, a review of juvenile sentencing reports found that prosecutors routinely described black and white youth differently: blacks committed crimes because of internal personality flaws, such as disrespect, whites did so because of external conditions, such as family conflict.
–The New Jim Crow
Video Analysis Shows #Gaza #Hospital Hit By #Failed #Rocket Meant for #Israel. A video analysis by the Wall Street Journal using security cameras and live feeds inside #Israel and #Gaza shows how a failed rocket caused the deadly explosion at #AlAhli #Arab #Hospital. #propaganda v #facts #war #journalism #factchecking #bias #terrorism #hamas #antisemitism #hatred https://www.wsj.com/video/video-analysis-shows-gaza-hospital-hit-by-failed-rocket-meant-for-israel/120A1C22-BA32-418E-8837-BC4141FEFB00.html?mod=djemwhatsnews #warcrimes #legalresearch #justice
Here we have CNN exaggerating, literally by MILLIONS of people, the scope of how many Palestinian civilians could be displaced by Israeli military action in Gaza.
The entire population of Gaza is 2 million.
If a million people have already fled, as she states, then there are not *millionS* more who will become displaced. 🚨
At least Reuters can do the math. Stay safe in this information space, folks.
“Now we’re giving a voice to machines.”
For Rest of World, Victoria Turk explores bias in AI image generators like #Midjourney: https://restofworld.org/2023/ai-image-stereotypes/
#Longreads #AI #ArtificialIntelligence #Stereotypes #Bias #Culture
Does science need heroes or does it need to reform? I elaborate on this in a blog post, based on a talk I gave at the recent Rijksmuseum Boerhaave symposium on Prize Cultures. Idolizing heroes can worsen #bias, #inequality, and #competition in science. Yet, it does require good #leadership to ignite structural change. https://www.leidenmadtrics.nl/articles/does-science-need-heroes
Next was a great set of talks on entrepreneurship at the Strategic Management Society with Alexis Smith Washington (skin tone bias in microlending), Peter S. Polhill (Romani entrepreneurship), and Taekyun Kim (impact of COVID-19 on entrepreneurship) https://www.youtube.com/watch?v=sd3d-jad3aw (6/9) #startups #bias #economics
‘The Tax Administration/Benefits Office (Belastingdienst/Toeslagen) has been guilty of indirect discrimination in the childcare benefits affair. This emerges from the first 3 cases of so-called ‘benefits parents’ that the Netherlands Institute for Human Rights (College voor de Rechten van de Mens) has investigated and tested against the General Equal Treatment Act. The verdict, three times over: prohibited indirect distinction on grounds of race.’
https://www.mensenrechten.nl/actueel/nieuws/2023/10/2/toeslagen-oordelen #discrimination #bias #law #politics #netherlands
Netherlands Institute for Human Rights, the non-discrimination authority: The Tax Authority is guilty of indirect discrimination in the childcare benefits affair. This emerges from the first three cases of parents investigated by the Institute and tested against the General Equal Treatment Act. The verdict is three times: prohibited indirect discrimination on grounds of ethnicity. https://www.mensenrechten.nl/actueel/nieuws/2023/10/2/toeslagen-oordelen #discrimination #bias #law #politics #netherlands
Nature study on AI and science – What 1,600 researchers think now
#AI #Science #Research #Risks #Opportunities #Ethics #Bias #Technology #Future #Innovation #ArtificialIntelligence #ChatGPT
EEOC Settles First-of-Its-Kind AI #Bias in Hiring Lawsuit
"The US Equal Employment Opportunity Commission settled its first-ever #AI #discrimination in hiring lawsuit, reaching an agreement with a tutoring company that allegedly programmed its recruitment software to automatically reject older applicants."
It was very interesting to listen to Amy Hupe's Systems of Harm, especially her interview with David Dylan Thomas. I liked the idea that we can't remove bias from our thinking, but we can increase our awareness of it. https://systems-of-harm.fireside.fm/ep1-david-dylan-thomas
"As larger women age, they incur the effects of years of cumulative wage discrimination. Controlling for other factors, their starting wages are lower. Throughout their working careers, these women receive fewer raises and promotions."
https://www.economist.com/christmas-specials/2022/12/20/the-economics-of-thinness
I.T. systems company Fujitsu Ltd. is providing artificial intelligence model generation and fairness assessment software as open-source projects hosted by the Linux Foundation.
https://sciencebusiness.technewslit.com/?p=45211
#News #Press #Science #Business #ArtificialIntelligence #MachineLearning #Models #Algorithm #OpenSource #Software #Collaboration #Bias #Fairness #Engineering #ComputerScience #Linux #Japan #Europe
#Myside #bias is me google searching "What is the youngest recommended age for reading the Lord of the Rings to a child?" and stopping my search only when I've found a page that says 4 years old. Even if I have to make that page myself! 🙃
Mastodon, I'm bringing this one to you first.
Earlier this week the New York Times claimed to have partially corroborated some of Jamie Reed's allegations against a clinic for transgender youth in St. Louis.
Their story did not back up the assertion that they corroborated Reed's claims. I know that because I looked at every single one of the 69 claims. #trans #transgender #news #journalism #Newstodon #bias
https://www.assignedmedia.org/breaking-news/jamie-reeds-allegations-are-not-even-partially-confirmed
Christopher Land's Disability Bias in Artificial Intelligence talk at #ID24 looks great.
Learn about bias in artificial intelligence which can discriminate against marginalized groups. When built accessibly, AI systems can promote fairness and enhance the lives of everyone, in amazing and unprecedented ways.
1pm ET September 21st https://inclusivedesign24.org/2023/schedule/
Sometimes I think about the time I heard a Danish right-wing politician flat-out state, unprompted, in a speech inside f**king parliament, that they were using anti-immigration as a platform for building the influence needed to relieve taxation & regulation on the rich. Not only did no mainstream news channel ever say a single word about it, but neither, to my knowledge, did any other politician, including her opposition...
First was a great talk by Timothy Pleskac on equity and fairness of single vs. double-blind review at @sfiscience. There's important interrogation here of #academia, KPIs, #bias, and the benefits of some randomness in rewards. Highly recommend https://www.youtube.com/watch?v=E9cy5a8zmYc (2/7)
"OpenAI is using GPT-4 to build an AI-powered content moderation system"
I doubt anything good is going to come out of this. It has already been demonstrated that AI systems are biased in how they handle cases *if* the dataset they are trained on is biased in the first place.
I can only see moderators where this is implemented shrug and say "I cannot be biased, because the black box told me to do it, and the black box is not biased."
Why must the BBC redirect me from their `.co.uk/news` site to bbc.com?
I want #international #news with non-American #bias, not news provided by an international source geared towards an American audience.