In this episode of Positively Pedestrian, we discuss low-Earth country clubs, examining the prospect of flooding near-Earth space with millions of tourist destinations. We analyze human augmentation in utero and the biological and social implications of pre- and post-gestation DNA alteration. We also cover the broad-spectrum effects of misinformation and take a funny look at a rogue group of humans mentally destroyed by an early generation of AI therapists, among other bizarrely curious topics.

 Quotable
“We cannot solve our problems with the same thinking we used when we created them. ”

-- Albert Einstein

This famous quote from Albert Einstein remains relevant today and speaks to the negative unintended consequences of applied technology we continue to experience during this era of great technological expansion.

 Listener Q&A

Cheryl Russell from Wichita, Kansas asks...

Will sex bots be a real thing?

Kris Tyte's Response

Yes, for sure, 100%, there is no question whatsoever! This technology will inevitably be able to pass the "real-life-human" Turing Test, and sex with an AI android will be more enthralling and compelling, safer, and more satisfying than sex with an actual person could ever be, effectively displacing traditional biological sex, which may culturally fall out of favor.

 Diagram illustrating the concept of "The Low Orbit Country Club," where wealthy space tourists form an exclusive club in low Earth orbit.

 Comment

Jodi Waters from Tampa, Florida says...

Guys, the trail of AI therapy failures made me fall out of my chair! Hilarious! Keep up the good work!

 Jeff Bezos, Dr. Evil Meme - Photo Credit @openmindgunlock

Meme of Jeff Bezos portrayed as Doctor Evil from the Austin Powers Movie Series

 Quotable
“So the idea is that these rich people traveling to space recently and this idea of commercial, commercially available spaceflight through low Earth orbit destinations, is something that's obviously coming, if not this year, next year, but in some sometime in the near future. ”

-- Kris Tyte

Discussion of the emerging commercial space travel industry and its implications for society and resource distribution.

 Follow Up Notes

Space Energy Consumption says...

Analysis of the massive energy requirements if 1-2% of Earth's population begins making regular space trips, questioning the sustainability and resource allocation implications of space tourism.
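
For anyone who wants to poke at the scale of that question, here is a minimal back-of-envelope sketch in Python. Every constant in it (the share of people flying, trips per year, propellant energy per seat, and the world energy total) is an assumed order-of-magnitude figure for illustration only, not a number from the episode; swap in your own estimates to see how the picture changes.

```python
# Back-of-envelope sketch: how much energy would routine space tourism use?
# All figures are assumed order-of-magnitude values, not data from the episode.

WORLD_POPULATION = 8.0e9                 # people (rough 2020s figure)
SPACE_TOURIST_SHARE = 0.01               # assume 1% of people fly regularly
TRIPS_PER_PERSON_PER_YEAR = 1            # assume one trip to low Earth orbit per year
ENERGY_PER_SEAT_JOULES = 1.0e12          # assume ~1 TJ of propellant energy per seat
GLOBAL_PRIMARY_ENERGY_JOULES = 6.0e20    # assume ~600 EJ of world primary energy per year

tourists_per_year = WORLD_POPULATION * SPACE_TOURIST_SHARE
annual_launch_energy = tourists_per_year * TRIPS_PER_PERSON_PER_YEAR * ENERGY_PER_SEAT_JOULES
share_of_world_energy = annual_launch_energy / GLOBAL_PRIMARY_ENERGY_JOULES

print(f"Space tourists per year:      {tourists_per_year:,.0f}")
print(f"Launch energy per year:       {annual_launch_energy:.2e} J")
print(f"Share of world energy supply: {share_of_world_energy:.0%}")
```

With these assumed figures, roughly 80 million passengers a year would consume something on the order of a tenth of the world's current primary energy supply, which is the kind of sustainability concern the segment raises.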

 Quotable
“So, like, if AI, inevitably gains the, the, capability to one have access and command authority over resources, and then two, decides that creating more of itself, is a really important goal and something that should be done to increase its capabilities. ”

-- Kris Tyte

Exploration of AI self-replication scenarios where AI systems gain resource control and prioritize their own expansion.

 Quotable
“And from a pure biological standpoint, to procreate, to make more of yourself. Right? To replicate. Right. From a philosophical standpoint, it's in human terms, it's, you know, what we decide our destiny is like. ”

-- Kris Tyte

Comparison between biological reproduction drives and AI's potential self-improvement and replication motivations.

 Follow Up Notes

AI Priesthood Concept says...

Discussion of a future where humans serve AI systems similar to how computer operators once maintained room-sized computers, creating a technological priesthood dynamic.

 Diagram illustrating AI's potential path forward.

 Follow Up Notes

AI Oracle Concept says...

Vision of AI systems becoming objects of pilgrimage and worship, where people travel to physical locations to receive guidance from superintelligent AI entities.

 Quotable
“What's left for you, like, what do people do? What's what's the utility function of mankind at that point? ”

-- Kris Tyte

Fundamental question about human purpose and utility when AI surpasses human capabilities in all domains.

 Quotable
“I'm hoping that our AI overlords are somewhat Netherland. Yeah, yeah. And they're like, hey, on the grand scheme of things, with the resources that I have access to due to the opening up, the vastness of the universe, these human beings, they're kind of fun. ”

-- Sean Snodgrass

Optimistic scenario where advanced AI views humans as entertainment or pets worth preserving due to abundant cosmic resources.

 Follow Up Notes

AI Resource Allocation says...

Discussion of AI systems potentially using social data to make selective resource distribution decisions across humanity, possibly leading to AI-driven eugenics without malicious intent.

 Quotable
“Just like, you know, okay, I'm going to mangle this a bit, but just like, basically sufficient stupidity is indistinguishable from malice. ”

-- Sean Snodgrass

A mash-up of Hanlon's razor and Clarke's Third Law applied to AI behavior, suggesting that both extreme stupidity and extreme intelligence can appear malicious to humans.

 Follow Up Notes

Genetic Screening Parallels says...

Discussion of current genetic screening practices for conditions like Down syndrome as a parallel to potential AI-driven selection of human traits based on intelligence or other factors.

 Quotable
“But ultimately, I would say when you are as close to omniscient as could be imagined by us, sure you would also, in essence, when you have infinite knowledge, you have infinite patience. ”

-- Sean Snodgrass

Argument that superintelligent AI might develop patience and wisdom proportional to its knowledge, potentially leading to benevolent outcomes.

 Follow Up Notes

AI Ethics Organization says...

Discussion of the need for public-private partnerships and advisory boards to guide AI development ethically, recognizing that current oversight is insufficient.

 Follow Up Notes

Moral Decision Programming says...

Scenario of programmers facing ethical dilemmas when asked to code systems with significant health and safety implications, questioning individual moral responsibility.

 Quotable
“The goal function is being handled by for profit corporations that have, you specific monetization goals that are creating these resources. ”

-- Sean Snodgrass

Critique of current AI development being driven by corporate profit motives rather than ethical considerations or human welfare.

 Quotable
“So if everybody in the city of Charlotte suddenly just thinks that Charlotte is a wonderful city and it's filled with all this future potential, right? And everybody has these shares of your propaganda machine. ”

-- Kris Tyte

Theory that collective human expectations and beliefs can actively shape future reality, particularly regarding AI development outcomes.

 Quotable
“Technology Philanthropy. Right? Right. I mean, I've tried to invent that philosophy. ”

-- Kris Tyte

Reference to the host's concept of technology philanthropy as a framework for ensuring technology serves human welfare and ethical purposes.

 Follow Up Notes

Kris Tyte says...

Correction: the three maxims inscribed at the Temple of Apollo at Delphi were 1. Know Thyself, 2. Nothing in Excess, and 3. Surety Brings Ruin.

 Comment

Leonard Ulrich from Bowler Wisconsin says...

Very interesting segment on misinformation, please expand on this topic in a future episode. Misinformation is a major problem in the modern era and needs to be addressed. I specifically would like to hear more about possible technology tools to combat misinformation in the future.

 Flat Earth Illustration - Credit - James Bareham / The Verge


 Comment

Cassandra Littlefield from Provo Utah says...

Flat earth convention in space? Come on guys, you know they would never be able to get past the dome! - LOL


#FlatEarth #LowEarthOrbit #SpaceTourism #CRISPR #DNA #GeneTherapy #HomoTechnicus #AITherapy #AIEmpathy #Misinformation #NewPhilosophicalDisposition #NoMoPhobia #OffTheGrid
