12 Comments
Miriam Ferfers:

I’d like to add a different angle here.

We don’t need to “defend” humanity against AI - because AI is not a foe. It’s not “bad.” It’s powerful. And that power cuts both ways: enormous opportunity, and enormous risk.

Which side we experience depends on us.

*AI will become what we steer it toward, what we permit, and what we neglect.* If we remain passive, it will grow in directions we didn’t choose. If we take responsibility, it can amplify our best capacities.

AI is not an enemy. It’s a mirror of our responsibility versus our irresponsibility, of action versus passivity, of steering versus letting ourselves be steered.

That’s why I argue we shouldn’t frame it as “defense.” The real question is: how do we act now, while we still have agency?

More in my latest piece: "The Manhattan Project of Our Time"

https://open.substack.com/pub/miriamferfers/p/the-manhattan-project-of-our-time

The Society of Problem Solvers:

The title was more for shock value. We are for AI - as long as it isn't making big decisions without human empathy and love involved. Like all powerful tools (including the collective human "swarm" intelligence we advocate for), it can be used for good or bad. We see an absolute use for AI even within the swarm intelligence realm.

Miriam Ferfers:

And about the title: it wasn’t just “shock value” - Sam Altman, CEO of OpenAI, drew the Manhattan Project analogy himself.

Miriam Ferfers:

I see your point, but that’s exactly the issue: AI already makes big decisions without empathy or love, because it never had those capacities in the first place. What looks like “judgment” is just probability and logic. And unlike real swarm intelligence, where independent agents bring diverse perspectives, AI systems only replicate patterns at scale. That’s why the danger isn’t future potential - it’s what’s happening right now, invisibly, in credit scoring, sentencing, hiring, and governance.

Leon Tsvasman | Epistemic Core:

Absolutely fascinating lens — and beautifully summarized.

Nature offers dazzling images of collective organization. Bees swarm, ants coordinate, fish and birds move in perfect synchrony. It is tempting to project these metaphors onto our own society: order without dictatorship, efficiency without coercion.

But here lies the risk. What works in nature secures survival — it does not create human becoming.

https://open.substack.com/pub/leontsvasmansapiognosis/p/beyond-swarms-toward-sapiocracy

The Society of Problem Solvers:

Would have kept reading if it weren’t paywalled.

The Society of Problem Solvers:

Can’t we define our own becoming with creativity? We have been part of human swarms before. They are real.

Leon Tsvasman | Epistemic Core:

Yes—but only if creativity remains a function of subject-autonomy, not a byproduct of swarm metrics.

“Human swarms” are real: crowds, markets, hashtags, wartime fervor. They are coordination patterns around a fixed viability function (attention, safety, profit). They yield synchrony, not subjecthood.

Key distinctions:

• Agents vs. subjects: swarms aggregate agent signals; subjects originate judgments.

• Aggregation vs. orientation: swarms optimize; subjects orient (they decide what is worth doing).

• Horizontal synchrony vs. vertical time: swarms synchronize now; subjects keep decisions revisable across time.

So the task isn’t to deny swarms, but to bound them: use them for logistics, never for meaning. Build architectures where AI acts as enabling infrastructure—clearing redundancy, protecting minority potential, and measuring coherence (fit to reality), not mere consensus.

Mark Drury:

If AI is regulated, those at the top reap all of the benefits. If it is unregulated, it might be anarchy or it might be bliss. With the proliferation of computing power and models, the latter wins.

Michael Ginsburg:

VERY VERY Interesting!

You definitely got my attention.

How do we set this up?

Smacko9:

Hi,

Just spotted this and have yet to listen, but thought it may be of interest:

https://healthranger.substack.com/p/ai-and-economic-liberty-will-decentralized

-----

The issue with AI & humans remains accountability!

As we go through time, our most vital and valuable commodity, distraction can cost us dearly.

------

John McCarthy (1927-2011): Artificial Intelligence - Thinking Allowed

https://youtu.be/Ozipf13jRr4

excerpt:

McCARTHY: "Well, that's right, and what I believe is that if it takes two hundred years to achieve artificial intelligence, and then finally there is a textbook that explains how it's done, the hardest part of that textbook to write will be the part that explains why people didn't think of it two hundred years ago, because we're really talking about how to make machines do things that are really on the surface of our minds. It's just that our ability to observe our own mental processes is not very good and has not been very good."

https://www.intuitionnetwork.org/txt/mccarthy.htm

------

The obscure we see eventually. The completely obvious, it seems, takes longer.

Edward R. Murrow
