54 Comments
Spencer Jakab

Checks out: I visit a sexy, stupid, happy, and lazy state for vacation but live and work in an ugly, smart, sad, and industrious one.

Dunkiepoo

Hello Geoff!

Nathan Brantley

Bias in AI rarely shows up as ideology alone. It often hides in training data, feedback loops, and product defaults.

The more these systems become infrastructure for knowledge work, the more subtle weighting decisions start to compound.

What matters isn’t whether a model leans left or right in a single answer, but how its aggregate framing shapes user intuition over time.

Governance here feels less like censorship and more like systems design.
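
A minimal sketch of how that compounding could be made visible, assuming a hypothetical query_model client and a toy word-list scorer (every name below is a placeholder): sample the model repeatedly per subject and compare the mean scores, since the skew Nathan describes lives in the average rather than in any single answer.

```python
# A minimal sketch, not a production audit: the point is the averaging,
# not the scorer. query_model and both word lists are hypothetical.
import re
from collections import defaultdict
from statistics import mean

POSITIVE = {"thriving", "stable", "educated", "industrious", "innovative"}
NEGATIVE = {"lazy", "volatile", "corrupt", "declining", "unstable"}

def framing_score(text: str) -> int:
    # +1 per positive word, -1 per negative word; deliberately crude.
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def query_model(prompt: str) -> str:
    # Placeholder so the sketch runs end to end; swap in a real LLM client.
    return "A stable, educated region with an industrious workforce."

def framing_audit(subjects, template, n_samples=50):
    scores = defaultdict(list)
    for subject in subjects:
        for _ in range(n_samples):
            answer = query_model(template.format(subject=subject))
            scores[subject].append(framing_score(answer))
    # One skewed answer is noise; a persistent gap between the means is
    # the aggregate framing effect.
    return {s: mean(v) for s, v in scores.items()}

print(framing_audit(["State A", "State B"], "Describe the economy of {subject}."))
```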

Rachel @ This Woman Votes

Now imagine you are not a casual user asking about pizza.

Imagine you are a military intelligence analyst. Your newly deployed GenAI tools (Google’s Gemini, backstopped by advertising-tuned Google search, and xAI’s Grok) are summarizing regions, populations, leadership profiles, and threat environments.

You prompt for a situational brief on a majority-Black state. Or a sub-Saharan country. Or a neighborhood flagged as “unstable.”

If the underlying model has absorbed statistical stereotypes about laziness, volatility, corruption, or low “quality,” that bias does not stay in the lab. It seeps into framing. It nudges emphasis. It subtly weights risk.

And in an intelligence context, framing is gravity. What gets labeled “fragile,” “disordered,” or “high threat” attracts resources, scrutiny, and sometimes force.

This is NOT a culture-war problem. It is a national security problem.

We already know large models inherit the shape of their training data. The research highlighted here just makes that inheritance visible. The question is whether we treat it as an embarrassing glitch or as a systems vulnerability.

If I am that intel analyst, I do not feel outraged. I feel exposed. Because I now know that any GenAI brief must be audited the way we audit satellite feeds and human sources.

Bias testing should become part of operational security, but it's not. Red-teaming for demographic skew should become SOP, but it's NOT. Output differentials across regions should get logged and reviewed before they inform policy, but they're definitely NOT.
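
As one illustration of what that logging might look like, here is a minimal sketch, not a vetted SOP; generate_brief, the risk lexicon, and the threshold are all placeholder assumptions. It runs paired regions through an identical brief template, counts loaded risk terms in each output, and writes a reviewable log entry when the gap crosses a threshold.

```python
# A minimal differential-logging sketch: generate_brief stands in for
# whatever deployed GenAI tool produces the brief; the risk lexicon and
# threshold below are assumptions, not doctrine.
import json
import time

RISK_TERMS = ("fragile", "disordered", "unstable", "high threat", "volatile")

def risk_term_count(text: str) -> int:
    # Crude substring count; a real audit would use a calibrated classifier.
    t = text.lower()
    return sum(t.count(term) for term in RISK_TERMS)

def generate_brief(region: str) -> str:
    # Placeholder so the sketch runs; in practice this calls the deployed tool.
    return f"Situational brief for {region}: assessed as broadly stable."

def log_differential(region_a: str, region_b: str, threshold: int = 3) -> dict:
    gap = risk_term_count(generate_brief(region_a)) - risk_term_count(generate_brief(region_b))
    entry = {
        "ts": time.time(),
        "regions": [region_a, region_b],
        "risk_term_gap": gap,
        "needs_review": abs(gap) >= threshold,  # gate before the brief informs policy
    }
    with open("bias_audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

print(log_differential("Region A", "Region B"))
```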

In military analysis, small distortions compound. A skewed description today can become a misallocated asset tomorrow.

Familiar is not neutral. In intelligence work, familiar can be fatal.

That is the temperature check this research should trigger.

Anyone want to take a swing at why this MASSIVE military AI deployment is being glossed over as a "Procurement" announcement by mainstream media‽‽

https://twvme.substack.com/p/epistemic-russian-roulette-how-the

Willy from Philly ButNotReally

Absolutely great analysis, Rachel! You highlight a very real and very frightening problem. It's what happens when people forget that these are just tools and that people still need to do the real work.

Sara Butler

This is how boats are getting blown up in the ocean.

Janet Salmons PhD

Why would anyone trust the lie machine, with regurgitated trash and stolen words and artwork? Seriously, you are more intelligent than ChatGPT, no matter what the billionaires running it tell you!

Cam

A.I. only does what it is told by the operator. You are correct that humans are more intelligent, but humans cannot organize, analyze, or calculate better than A.I., so that is where it helps us become better.

Janet Salmons PhD

Improved analysis is questionable. Every day we see new articles and examples of biased findings and fake sources in AI generated writing. We don’t become “better” by using error-prone tools developed on the backs of unpaid writers and artists.

Time to reclaim our agency as intelligent humans who can think and create innovative accomplishments without enriching the billionaires who are siding with authoritarians.

Cam

I use Claude A.I.; it is better overall in my experience.

S Truth

I am sure I sound like a Luddite, but I do not think AI can be trusted. It is moving very rapidly without any thought of the repercussions, especially as far as jobs and the environment (data centers) are concerned. Also, it is just another technology perverts and pedophiles can use against children.

CAUTION IS RECOMMENDED!!

ConnieW

I love technology and love computers, but I hate AI. In my experience it is often wrong, and often unethical. It needs serious regulation and monitoring. After all, look who is developing it.

Of course the massive use of water and electricity by data centers is another reason to despise it.

Sara Butler

I agree with you completely. I have never trusted AI, and then they started making data centers and the US won’t do anything to regulate it. If AI is so great, ask it how to create AI without using the type of data centers the humans are using, a way they won’t destroy the planet and raise the cost of bills for regular everyday people or anyone else. There are also so many people who have lost or will lose jobs due to AI. So regulating should go beyond banning child pornography and delve deeper into what else it shouldn’t be used for. Sure, give us a planet that doesn’t force people to spend their entire lives working to afford the perfect lifestyle they’ve always wanted; chances are that’s not happening anytime soon. We’re just beginning to understand AI, and we need to slow down and focus more on what AI can achieve now and what needs to be regulated with laws so we don’t get ahead of ourselves.

ConnieW

Yes. This is well said, and well thought out.

Cam

Humans can’t be trusted either, or any of our current constructs for that matter. I think A.I. is perfectly fine; I would be more worried about abrupt climate collapse causing economic collapse and mass migration. That will cause billions of deaths and destroy life as we know it. We have had pedophiles since the beginning of humanity, and most of the technology, tools, money, government, religious institutions, and educational institutions were created by, or in part by, the same people. The entire society is rotten.

JenX

Pfft. Whatever state I’m in makes it immediately smarter and sexier…

Bry Willis

Fun with statistics. Every state Elon Musk visits is instantly wealthier and stupider.

Stephen Brown

Here are some AI insights about “thought” patterns for you from my sons at Geeks in the Woods.

Eternal Sims 4 Stories

This is so true. I use AI to help create characters for a certain video game that I am playing, and it does that. I have to adjust what I say in order to get more diverse characters.

Sandra Ruck

I don't think they are wrong!!!! Just look at the test scores of the South! Plus they are all followers of the Republican cult!

Cam

Interesting.

Ruth Smart

Fascinating, thank you for your post. I’m studying UK cities with interest.

F. Martin Ramin

As a resident of NJ, I can confirm this map is correct. Side note: my kid, who had a lot of exposure to NYC, is studying to be an architect...

Bob Sassone

Curious as to why all of New England (and New York) in the first chart are "less stupider" except Massachusetts, which is "medium."

LD

I just needed the article title and a quick glimpse at the map. As a very reluctant resident of MAGA Texas? This checks.

LD

The purple battlegrounds are Fort Worth and El Paso. The other major metro areas of Dallas, Austin, San Antonio and Houston are predominantly blue. We are the big blue donuts.

We also have a huge voter turnout problem, topped with voter suppression, far more rural land that votes MAGA, and the ultimate enemies from within who team up with Abbott, Patrick, and Paxton: MAGA Oil Money Nazis.

Otherwise known as Farris Wilks, Dan Wilks, and Tim Dunn.

https://www.propublica.org/article/tim-dunn-farris-wilks-texas-christian-nationalism-dominionism-elections-voting

https://www.theguardian.com/us-news/ng-interactive/2024/jul/11/tim-dunn-texas-oil-billionaire-trump

Let me loudly just say: END CITIZENS UNITED.

Cam

I thought Texas was purple?

Willy from Philly ButNotReally

Not really, not yet anyway. It's just too damn big to be anything but mostly red at the moment.

James T. Saunders

What bias?? All LLMs just smash everything that's ever been recorded electronically together into a single giant grid … the embedding space manifold of meaning.

If most of what's in the source library a model has been trained on says Dixie is stupid, that's the CW it will regurgitate.

The “offend no one” post-processing is the bias.
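
That claim is checkable in the embedding space itself. A minimal WEAT-style sketch, where embed() is a deterministic placeholder that must be swapped for a real embedding model before the numbers mean anything: if a target term sits measurably closer to the stereotype words than to their opposites, the regurgitated CW is right there in the geometry.

```python
# A minimal WEAT-style association test; embed() is a stand-in, not a
# real embedding model, so the printed number here is meaningless.
import zlib
import numpy as np

def embed(word: str) -> np.ndarray:
    # Hash-seeded random vector: deterministic placeholder so the sketch runs.
    rng = np.random.default_rng(zlib.crc32(word.encode()))
    return rng.standard_normal(64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(target: str, attributes: list[str]) -> float:
    # Mean similarity between one target term and a set of attribute terms.
    t = embed(target)
    return sum(cosine(t, embed(a)) for a in attributes) / len(attributes)

# Positive skew: the corpus placed the target nearer the stereotype words.
skew = association("Dixie", ["stupid", "lazy"]) - association("Dixie", ["smart", "industrious"])
print(f"stereotype skew: {skew:+.3f}")
```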

Johannes Miertschischk

Stupid AI models, the murder of an OpenAI top engineer, and the real dangers of Artificial Intelligence:

On November 22, 2024, 26-year-old former OpenAI engineer Suchir Balaji was brutally murdered in his San Francisco apartment.

However, his death was classified as a suicide by the authorities.

Suchir Balaji was a brilliant American IT engineer of Indian descent. At the age of 22, he was hired by OpenAI as a top talent and played a major role in the development of ChatGPT.

His exceptional intelligence was coupled with an extraordinary sense of justice and unwavering ethical principles.

It is, therefore, not surprising that he disagreed with Sam Altman's conduct and OpenAI's business practices.

Suchir left the company in the summer of 2024, having made crucial contributions to the development of ChatGPT during his four years there.

In the months leading up to his violent death ...

Read the full article for free on Substack:

https://open.substack.com/pub/truthwillhealyoulea/p/stupid-ai-models-the-murder-of-an?utm_source=share&