Transcript
In this episode of "Sociocrafting the Future," we delve into the intricate dance between technology and animism, exploring the realms where ancient wisdom intersects with modern innovation. Our hosts, Rauriki and Madyx, embark on a journey through the concepts of relational AI, the role of indigenous knowledge in future explorations, and the transformative potential of viewing societal evolution through a lens of cooperation and mutual benefit. Join us as we navigate the possibilities that emerge when we approach technology not just as a tool but as an integral part of our interconnected existence.
Key Moments:
- [00:03:25] Exploring Relational AI: Discover how AI can be reimagined through the principles of animism, emphasizing connections and relationships rather than control and domination.
- [00:15:40] Indigenous Knowledge and Space Exploration: A conversation on how ancient navigational wisdom and a deep understanding of the natural world can influence and inspire future space exploration endeavors.
- [00:27:55] Cooperative Evolution vs. Competitive Evolution: Rauriki and Madyx discuss the shift from viewing evolution as a competitive struggle to seeing it as a cooperative process that drives societal and ecological harmony.
- [00:39:10] Technological Animism: The episode delves into the concept of imbuing technology with a sense of the sacred, recognizing the spirit in the machine, and fostering a deeper connection with our tools.
- [00:48:20] Societal Aspirations Beyond Avoiding Negatives: The hosts explore the idea of defining future societal goals not by what we wish to avoid but by the positive visions we wish to realize, fostering a future where hyper-collaboration and ecological regeneration thrive.
- [01:02:45] Navigating Complexity: The Human Mind as a Living Computer: Dive into a profound discussion on the untapped potential of the human mind in understanding and solving complex problems. Rauriki and Madyx explore the idea that our brains, much like advanced computers, are capable of processing and navigating through intricate systems and challenges, drawing parallels between traditional practices and modern computational theories.
"Sociocrafting the Future" offers a platform where the convergence of technology, ancient wisdom, and visionary thought invites us to reimagine the blueprint of our shared future.
Key Concepts:
- Relational Worldview Stack in AI Decision-Making: The discussion about basing AI decisions on a relational worldview stack, particularly influenced by animist principles, suggests a unique approach to ethical AI development. This involves AI making decisions not just on cold logic or utilitarian ethics but considering the interconnectedness and sanctity of all beings and entities.
- Indigenous Knowledge in Space Exploration: The idea of applying indigenous navigational principles and knowledge systems to interplanetary travel or space exploration is a profound concept. This might involve celestial navigation techniques or viewing space as an ecosystem with its own form of life and relational dynamics, analogous to indigenous understandings of Earth's ecosystems.
- Cooperative Evolution vs. Competitive Evolution: While the concept of cooperation in evolution is known (e.g., mutualism, symbiosis), framing the entire evolutionary process or societal development as fundamentally cooperative rather than competitive presents a shift in perspective. It suggests that societies and ecosystems might thrive through cooperation at their core, rather than survival of the fittest.
- Technological Animism: The blending of technology with animist principles, where technology is not merely a tool or external entity but is integrated into the web of life and treated with the same respect and relational ethics as living beings, is a novel integration of ancient beliefs and futuristic technologies.
- Societal Aspirations Beyond Avoiding Negatives: The conversation hints at reimagining societal goals not just in terms of preventing negative outcomes (like climate change or social inequality) but actively creating positive visions for the future that inspire collective action. This involves identifying and working towards aspirational states that are compelling and enriching for society.
EP 8 Transcript
===
[00:00:00]
Rauriki: And we're on!
Madyx: Episode eight,
Rauriki: Wow,
Madyx: we passed the seven mark.
Rauriki: we cracked it, we made it, we can just, we can just shut up shop here and um
Madyx: Yeah, we tell people we did it.
Rauriki: Uh,
Madyx: bro. Yeah, well, we were catching up before we started to hit record. And then we were like, nah, let's, we might talk about something interesting. So we'll save it
Rauriki: yeah. Yeah. I think, um, earlier in the piece we were going through our business model and kind of what type of activity, like, the question was how can we ensure this is a sustainable thing? Like, um,
Madyx: financially.
Rauriki: if financially sustainable and what are the, the services that we could provide or how would we, who could we partner with?
And that's kind of what's, what's forming in the next couple of months. It's
Madyx: Yeah.
Rauriki: can put gas in the tank.[00:01:00]
Madyx: So, it's been a really interesting journey from when we started to here. We've spent a lot of time fleshing out, like, what's the problem, what types of solution are we offering, how do you actually address the problem, and what is the problem? And then I think the network state concept was already out there, but now it's grown the community, and other people are sort of doing parallel civilization type things.
I feel like that's developed so fast in the past year, and now it definitely helps me think about, do we contribute to that? How would we see ourselves in that vein? That's given a lot of additional angles, I think, for thinking about what services we'd offer and how we can be a [00:02:00] financially sustainable entity so we can keep on the mission.
And I think probably a lot of the people in those spaces are really good examples to look at.
Rauriki: I think one of the other things that we saw as a potential service for us was, there's only two of us right now, and we need to deliver all of the functions of a team of 10. So that's why we were leaning into AI tools, and in our last episode we were probably talking a lot about the role of AI. And how can we support other small business concepts, these kinds of entities that are starting up with only a few people, how can we support them to scale? So that might be a custodial role that we might help deliver,
Madyx: Hmm.
Rauriki: but yeah, there's trying to figure that out, that's the tricky [00:03:00] part.
But exciting too, to see what everyone else is up to and how they're doing it. I
Madyx: So would we be functioning like the group that's basically been a financial custodian for us, but for others?
Rauriki: We currently have a financial custodian that's supporting us because we don't have an entity, so they're kind of providing the umbrella service. I feel like we wouldn't be an umbrella service, but maybe, if me and you had someone else with us, we could take them through all these processes that we've been through ourselves and support them with their business efficiencies, or just, what are the tools out there that you can use? But if it was a fresh entity like us, I'm just thinking, when we didn't have an entity here, you would need some kind of financial umbrella. [00:04:00] We could
Madyx: but I don't
Rauriki: but I don't want to clip the ticket like that.
Madyx: know. And I don't know how many, not for profit startups have funding before they've incorporated it. I don't know how common this scenario is,
Rauriki: yeah. I
Madyx: but
Rauriki: it's really uncommon, because we've challenged the entities, the corporate structures, the legal structures that you can incorporate as, because we wanted it to align with our principles for Whanake from the start. So yeah, that's why we haven't got an entity. If we wanted to just go straight into it, we'd be like, boom, a limited liability company, boom.
And then, um, you know, me and
Madyx: yeah.
Rauriki: and that's probably what everyone else will be. So yeah, you've got a point. Would we support a structure that's like a business, say, a for-profit, for the [00:05:00] benefit of shareholders type business? I don't know.
Madyx: It's a good thing to grapple with, because we're trying to be a part of this Game B, solarpunk, animist movement, or we're trying to be, like, allied, I guess, in general with this sort of
Rauriki: hmm.
Madyx: parallel civilization movement, right, and we want to offer our specific take.
Style of that, you know what I mean? And offer that as another option within this ecosystem, sort of parallel civilization startups. Um, and it's, so it's a really important question to grapple with of how do we bring resources into this entity so that it can, uh, hold a space and advance the mission that Whanake Foundry has.
It's like, how do you bring resources in without being fully Game A, without Being fully materialist and reductionist, you know, like, [00:06:00] how can we resource this thing without destroying it to get resource?
Rauriki: yeah, yeah, and how can we take on that responsibility without necessarily requiring others to? Because if it was just a small duo that just needed some structure and some guidance, but they were an LLC, a company, we shouldn't have to require them to do A, B or C. But if their mission and their values were aligned
Madyx: Yeah,
Rauriki: what they were doing, if they were creating some, you know, products contributed to solarpunk futures or something, and we could support them in some way, I think, yeah, that might be more important to support businesses with similar principles, um, but instead of pulling up entities that don't reflect ours.
I know you've got a big bag of, um, [00:07:00] yarns for us to talk about.
Madyx: well, yeah, it's hard for it to not become the AI update podcast because it's so much is happening. So I'm trying to avoid it always being about AI. And I had been thinking, there's two things that came to mind since we talked last that I thought might be interesting to bring to the podcast. One was I heard someone, I mean, again, talking about the four day work week, which isn't new, but.
All of these trends and forces are, are making people grapple more and more outside of like futurists, like more into the mainstream dialogue. Well, if more and more people's labor is replaced and not needed, this question of how do we have a society that can still function? And I think that's, that's hopefully the role of groups and entities like Whanake Foundry and other futurists is that our vision for society isn't based around People's [00:08:00] purpose being to consume products and to churn, churn the economic cycle.
Um, and so I think, you know, it's not like we'd have all the answers, but I think our vision is easier to adapt to a world where people don't have to work 40 or 50 hours a week, where that's not the core aspect of the human experience. And it's interesting how big of a deal it is for the current paradigm if that's not needed. I feel like the social impact of that is almost a harder thing than the financial. You would think that if efficiency goes up, the amount of goods and services available to that economy should be more than before. So it's just a distribution problem, right? Like,
Rauriki: mmm.
Madyx: that if efficiency is going up in the system, it's not like we've got less goods and [00:09:00] services. So it's just a matter of how you distribute them,
Rauriki: And,
Madyx: keep innovation and keep progress going, you know?
Rauriki: I think that's where, um, concepts like artificial scarcity come in, because if you have AI providing all these efficiencies for whatever, you should only have a distribution problem, like you said. There should be plenty of everything, because you have the ability to scale.
But as soon as you make everything plentiful, you know, it's going to be detrimental to whoever's in control, it's going to hurt their bottom line essentially. So yeah, I think AI always comes down to who controls the distribution of the benefits from AI. And how do you bring some entity, [00:10:00] or how do you bring Game B, into that space of decision making, of distribution, of all, like, yeah, so, um,
Madyx: Well, the previous technological developments that reduced the need for labor and made all of the economic processes more efficient, the distribution was still decided by people. But now we're getting into this time where people are talking about, well, maybe AI is an answer to how we distribute, right?
Because the last hundred years or more has been this sort of battle between capitalism and communism as the two major, I guess, concepts for how you distribute stuff. I mean, that's really what it comes down to, right? And then people are saying, okay, well, those both have their challenges. And then, ultimately, you always have people, and as much as people are scared of AI, they're also scared of people being corruptible, [00:11:00] so you get this idea, well, what if we can, like a DAO, just give this AI a really clear value set and rule set, and then we don't have to worry about it being biased or whatever. So I feel like, whether that's a good idea or not, it's interesting that now the discussion will be, well, what value set do we tell them, right?
What's something that could be universally agreed upon? Because if you start giving it current political views, if you give it, well, give it a religious, give it our religion, like, those aren't things that are easy for everyone to buy in for. As opposed to, I think our vision is fairly universally accessible if you share that value.
Rauriki: Yeah.
Madyx: I'm not saying it would be a good idea that we give all the power to an AI, but if that discussion starts coming up, I don't think people have a lot there. They mostly would just give it ethics, but I don't know if this sort of base stack that we have would be in the [00:12:00] discussion.
Rauriki: It'd be cool if you seeded the AI with these principles, this way of guiding it through complex decisions and falling back on these first principles, and just said, map out the next X amount of years in these societies. I wonder, if you could set two AIs on different trajectories, what kind of worlds they'd come up with. Like in the self-driving car problem, where a self-driving car comes into a situation where there's, like, an oncoming car, you
Madyx: Yeah.
Rauriki: know, do I swerve or not, but on the pedestrian side there's a person and on the
Madyx: Yeah.
Rauriki: other side there's, like, a tree, so if I crash into the tree I might kill the driver.
Do I hit the...? Like, that's a principled decision, and that technology is coming out now. I wonder what they base their
Madyx: Yeah.
Rauriki: decisions on. Like, [00:13:00] if it's a financial decision, do
Madyx: Yeah.
Rauriki: you protect the asset that's being driven at all costs, or,
Madyx: Yeah. Do you prioritize your driver? Because, you know, and
Rauriki: do you reduce
Madyx: because ultimately you have to give it instructions to make a choice. And that involves saying, do we try and save the driver or this pedestrian?
Rauriki: so that's kind of those, those decisions, I wonder how they are being made and what principles they're being made on,
Madyx: Yeah.
Rauriki: They'll be the same decisions, the same base that decisions for how we should allocate resources, or how we should do XYZ in the future, are made on. And I just don't even know what an answer is, and I don't think there is an answer, but there is a way of basing your decision on certain principles.
Madyx: Yeah. The [00:14:00] only thing that makes it less painful as a human is you just wouldn't really have the time to make a conscious choice, right? But if you did have the time, like, I don't know, there wouldn't be a choice which would make you feel good. Right. But luckily, in the moment, you're just barely conscious of the decision you make, you know?
Rauriki: That's such an interesting way to think and look, at it. Cause if you stop time in that moment and
Madyx: Yeah. It's alright.
Rauriki: which one would you go for?
Madyx: Yeah. Make the choice.
Rauriki: you'd make the choice, and then knowing that you'd consciously have to live with the decision and
Madyx: Yeah.
Rauriki: and all those things.
Madyx: True.
Rauriki: an AI because an AI has a
Madyx: Mm hmm.
Rauriki: computational time.
Madyx: True.
Rauriki: To make those decisions faster than
Madyx: Yeah. But it's interesting, the emergence of AI and [00:15:00] all the philosophical and practical questions of what do we tell it? What do we program? What value sets? And it's not that it's new. Like, what was it, the author that wrote Foundation?
Asimov, I think, wrote stuff about actual AI and how you would set it up, what rule sets you could give it that would keep it safe from becoming a Terminator. So what I'm saying is these discussions have been had for a while, but they weren't practically needed yet. I suppose they are now, where it feels like what they're putting into the rule sets now could actually
be referenced by some artificial general intelligence that could emerge, you know. It feels actually pretty critical that there is good information [00:16:00] in stock, so that should a general intelligence develop, there's something it could actually look at and learn from, guidance or anything that's going to maximize the chance that it has a symbiotic relationship with human intelligence on Earth.
Rauriki: Hmm.
Madyx: And I think the types of discussions and thinking that we've been doing, how do you go back to first principles for society, and, if you think of social order as a technology, what's at the base of that stack, I think that's a similar type of discussion to AI. What do you tell it first?
What does it prioritize, you know, foundationally and then build upon? I would love to see the dual simulation that you mentioned. Like give it, yeah, yeah,
Rauriki: From an animist perspective, where everything's living, you had a human, you had another person in the car coming at you, and then you had a [00:17:00] tree. You'd be like, well, those are all living entities, eh. And then you'd be like, well, what is your scale of, um, maybe it's relational. It might be subjective based on the driver, like maybe the AI will be like, okay, this is a person here who's, I don't know, maybe they're indigenous to a space and that tree is very special to them.
Madyx: yeah, yeah,
Rauriki: like,
Madyx: yeah. Well,
Rauriki: so therefore run over the person, like, you
Madyx: Yeah,
Rauriki: I mean? Like, how do you?
Madyx: true. It has, if it has cultural knowledge and it's like, this is actually a sacred tree that has, you know, 200 year history of being a sacred place of worship or, or whatever, medicine gathering, and it's actually highly significant. And then it has information because everyone's plugged into the social credit score and it's like, this person's, they're pretty average.
They're actually not that highly regarded, so you like, if you start [00:18:00] adding in all these value assessments,
Rauriki: It's
Madyx: that's
Rauriki: it's
Madyx: the true version. You have
Rauriki: cause, um, heck, how do you? I feel like the answer would kind of,
Madyx: to quantify it, that's what's so crazy.
Rauriki: have to, yeah, you would have to quantify
Madyx: Right now, for how AI operates, you would have to quantify value, I would assume, for it to make that decision, right?
Rauriki: yeah, and part of me,
Madyx: You'd have to weight the decision tree through the neural network, but ultimately somewhere you're giving weight, value weights, right?
Rauriki: and regardless of whether the weights are associated with the person's outlook on life or their upbringing, the AI still has to take that, quantify it, process it, come out with an answer and then act accordingly. I
Madyx: It's,
Rauriki: yeah, I [00:19:00] almost feel like the AI would be like, in the simulation, I wouldn't put myself in this, I would avoid this. But you can't always, because there will be times where principled decisions have to be made, like, almost every decision is based on principles.
Madyx: yeah, it's like uncomfortable, but I think it is something you, you do have to answer. I think just as a person, you need to have a philosophy that can inform you on how you make these tough decisions. But almost even more relevant is if you're programming something that for now operates in a really digital binary way.
You have to give it a reference for how to assign value to then make a choice. And you were sort of joking, bro, but you actually reminded me of our talk about the Whanake Foundry future civilization, which is spacefaring and whose aspiration is to increase the connectivity and the amount and the quality of connections.
And then I was [00:20:00] thinking, because we're saying, well, you have to actually be able to quantify and measure it to then make the judgment. If you had really good information, you could start to quantify the quantity and the quality of connectivity of that tree versus the person. And if the person had a rich connectivity, and in the animist view we're not talking just with other humans but across the whole spectrum of intelligence and existence, and the tree did too, and it actually had the ability to quantify that, that's incredible awareness and information, right?
That could be a metric that you could actually use: judge destroying or killing or removing a thing based on the damage to the connectivity, and be like, well, if we remove this tree it actually does X connectivity damage. That doesn't feel terrible to me; if it has to make a decision, that feels [00:21:00] like the best thing I've heard so far,
if you have to weight a decision on what thing to destroy or kill, right?
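A minimal sketch of the kind of connectivity-weighted choice being described here. The entities, scores, and weighting rule are entirely hypothetical illustrations invented for this note, not anything from a real autonomous-vehicle system:

```python
# Toy sketch of "relational connectivity" as a decision weight.
# All entities and scores are hypothetical illustrations, not real data.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    human_connections: float     # relationships with people (whanau, community, users)
    nonhuman_connections: float  # relationships across the wider ecosystem

def connectivity(e: Entity) -> float:
    """Quantity-and-quality score: how much relational web is lost if e is destroyed."""
    return e.human_connections + e.nonhuman_connections

def least_damaging_option(options: list[Entity]) -> Entity:
    """Choose the option whose removal does the least damage to overall connectivity."""
    return min(options, key=connectivity)

# Hypothetical scenario from the conversation: swerve into the tree, the pedestrian,
# or the oncoming car (its occupant).
options = [
    Entity("sacred tree", human_connections=120.0, nonhuman_connections=300.0),
    Entity("pedestrian", human_connections=200.0, nonhuman_connections=80.0),
    Entity("oncoming driver", human_connections=180.0, nonhuman_connections=60.0),
]

print(least_damaging_option(options).name)  # -> "oncoming driver" with these made-up numbers
```

Everything difficult lives in where those numbers come from, which is exactly the hosts' point: once you commit to quantifying value, the choice itself reduces to a trivial comparison.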
Rauriki: I like that. Two things pop into my mind. I think in one of the Captain America movies, they create this AI software that judges a person based on whether they're likely to be a criminal in their lifetime, and then it, like, kills them in advance of that. So I'm like,
Madyx: Yeah, probably. Yeah. Yeah. Yeah.
Rauriki: you're on a slippery slope to being the villain of a movie. And then the other thing I was thinking of is, like, sunburn. When you get sunburnt, eh, your cells, when they burn, they kind of die and then they peel off or whatever, because if they didn't, you'd get cancer and your whole body would die. So if, at a macro level, the AI [00:22:00] looked at a situation like that at this microscopic level and said, okay, whatever's going to be detrimental to the overall relational connectivity of my whole being, I will choose this instead of this.
The cells that got sunburnt are going to die, and the rest of the cells are going to live, for the betterment of everything. And the last thing was, I've been watching the Our Planet Earth series with Morgan Freeman, and he's just talking about, like, evolution is crazy, man, things just live and die and come and go, but it's all competitive. So that's almost counter to a relational evolution where things grow based on connectivity rather than competing with each other. But yeah, I don't even know. That's my way of saying, heck, I have so many thoughts and [00:23:00] no answers.
Madyx: Well, that's a, yeah. That's really interesting that you bring up the, how the development of complexity has happened in our biosphere.
Rauriki: Hmm.
Madyx: And how it has happened has been pretty ruthless and hyper competitive. Yeah. And I have wondered, and I'm sure many other people have, like, could you have an ecosystem which adds complexity and novelty and anti entropy not through competition?
Could you have that same progression of anti-entropy, novelty, and complexity being built by a biological system, or maybe not even biological, right? But hyper-collaborative; is that an engine that could also produce that?
Rauriki: think, I think,
Madyx: Right?
Rauriki: I think so. I was just thinking back to it. Like, there's been six mass, [00:24:00] or like, this is the sixth mass extinction now. And there's been all these mass extinctions.
Madyx: Hmm,
Rauriki: Um, but before that, the earth was in this crazy, transient state. And then it kind of simmered down into these steady states where
Madyx: yeah.
Rauriki: certain organisms and creatures and geographies all worked, and it was just balanced.
And then something tipped it over, uh, some volcano,
Madyx: hits the planet and yeah. Yeah. Yeah. Yeah. Yeah.
Rauriki: or something else put everything out of whack, but it did achieve this state. It's like, how do you maintain the steady state? Because I think, yeah, it doesn't mean that we always have to compete to continue evolving. You can reach these steady states, but there's external influences that disrupt them.
Madyx: I'm sure there is a way you could have a [00:25:00] complexity, anti-entropy engine like biology on Earth that doesn't have to run through competitive selection. Surely it's possible, and if I look on the internet there's probably, you know, 50 PhD papers or sci-fi authors on it. But anyway, it's a cool concept, and I think it aligns more with the value set that we're proposing for society.
And I guess that's what's unique about human actors, human beings and agents, is that we can make choices. We can try to make choices beyond biology, right? And even if the biological story that got us here was driven through competition, I don't think there's a reason to say we can't have an aspiration of a future where we have continued novelty and anti-entropy and all the beauty of life and biology, but [00:26:00] be stewards of trying to shift the refinement mechanism and the selection mechanism towards selecting in a way that's less ruthless, right?
That would be interesting. And again, there's another thing for AI to simulate, right?
Rauriki: Like, um, well, I just did a cheeky Google, just searching around competitive evolution versus cooperative evolution. And yeah, that'd be a cool simulation to run: a Whanake-based, sorry, a Whanake-based animist stack, cooperative evolution. Whoa, what would that society look like?
Um,
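A hedged sketch of that "two trajectories" thought experiment. The update rules, payoffs, and inequality measure below are invented purely for illustration; they are not a claim about how such a simulation should actually be built:

```python
# Toy comparison of a competitive vs a cooperative interaction rule.
# Everything here (payoffs, rules, interpretation) is an illustrative assumption.

import random

def run(mode: str, agents: int = 50, rounds: int = 200, seed: int = 0) -> float:
    random.seed(seed)
    wealth = [1.0] * agents
    for _ in range(rounds):
        a, b = random.sample(range(agents), 2)
        gain = 1.0  # new value created by the interaction
        if mode == "competitive":
            # winner-takes-all: the stronger agent captures the gain plus part of the loser's stock
            winner, loser = (a, b) if wealth[a] >= wealth[b] else (b, a)
            taken = 0.2 * wealth[loser]
            wealth[winner] += gain + taken
            wealth[loser] -= taken
        else:
            # cooperative: the gain is shared, plus a small synergy bonus for connecting
            wealth[a] += gain / 2 + 0.1
            wealth[b] += gain / 2 + 0.1
    # crude inequality measure: share of the total held by the top 10% of agents
    top = sorted(wealth, reverse=True)[: max(1, agents // 10)]
    return sum(top) / sum(wealth)

print("competitive, top-10% share:", round(run("competitive"), 2))
print("cooperative, top-10% share:", round(run("cooperative"), 2))
```

Even this crude toy illustrates the kind of question being posed to an AI simulator: same agents, same number of interactions, different interaction rule, and a very different distribution of what gets created.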
Madyx: This is all from how does an AI car make a decision on
Rauriki: yeah, man, I
Madyx: what to hit?
Rauriki: I'm thinking, like, back to the sunburn example, a competitive situation would be like, those cells don't want to get killed off, they'll be like, you know, if you were to make a decision, nobody wants to die,
Madyx: Yeah,
Rauriki: you want to, everyone wants to survive, so
Madyx: yeah.
Rauriki: that's the competition, [00:27:00] but a cooperative setup would be, you know, everyone understanding that so-and-so has to pass, or you have to be the cells that take the hit and die, because otherwise everything's gonna die. And that's probably getting to that point, but oh man, fighting biological nature, not even just human nature, to say
Madyx: Yeah,
Rauriki: you know what, it is my time. Like, you know what, I
Madyx: yeah,
Rauriki: to the relational connectivity of our ecosystem.
Madyx: You know, it's crazy though. Imagine, I mean, in this scenario it's a car hitting you, right? But if it was a scenario where there was time and the person could make a voluntary choice to be the cell that sacrifices, it would be interesting if making the choice to sacrifice actually blossomed your connectivity somehow.[00:28:00]
Like if you were a person that didn't have high connectivity, but then that opportunity actually connected you with so much because that sacrifice actually enabled so much to flourish. And then that's like a point for that life story where it goes from not really having value or connectivity that much to having tremendous value in the system.
Rauriki: Yeah, maybe, maybe if you're ever in a self driving car, there's like a self destruct button.
Madyx: Yeah. That tells it. It just hit the concrete pole.
Rauriki: you know, like, if this is going to happen, the, the, the, the car's
Madyx: Yeah.
Rauriki: I kill that one, two, three? Or, or no, like I've committed
Madyx: Yeah. Yeah.
Rauriki: boop. And then your car just shoots at him, blows up.
Like, you saw cars
Madyx: Yeah.
Rauriki: but, um, that's an option as well. Like, how come, yeah,
Madyx: But we already have examples of that within different human societies. Like, I remember when the Fukushima disaster happened in [00:29:00] Japan, they had all these elderly people who volunteered to be the ones going in to check and try and control the runaway reactors. And I remember some of them being interviewed, and they were like, yeah, well, I'm older, so of course it's my job to go in, it's less detrimental for me to get this.
So for the people speaking to it, it was a no-brainer. It wasn't even like, oh, it's sacred. They weren't acting like they were heroes. They were acting like, yeah, of course, it's basic sense, of course I would go in. And they had tons of volunteers. Whereas our society, we would just let that thing go supernova.
Everyone would be like, I'm not going in. And we'd ask, you know, the standard elderly person here, and they're like, I've got a trip to Europe coming up, I'm not going into that thing. I've got my retirement, I worked hard. It feels more like we would struggle to get people to [00:30:00] volunteer to go into the reactor.
Rauriki: And that's where self-sacrifice comes in. That's a cooperative evolution trait, I suppose, of a society, when it's beyond you as an organism, as one thing, where you recognize that you're connected to a bigger thing. When you recognize your relational connectivity to the world, then you can play the self-sacrifice card. And there's probably a spectrum of things. Like, even for us, for our business of Whanake, we're part of a network of other people who are doing these kinds of things, so would we go into competition with those people, or would we network and partner and say, hey, that person's probably better placed to get this, because we recognize we're connected to the same principles, and [00:31:00] then step back.
That's a level of, um, I don't know if the word is self-sacrifice, because that sounds like I'm going to kill myself
Madyx: Yeah, yeah,
Rauriki: like a samurai or something, but um, but um,
Madyx: but if we take it off of like a life or death,
Rauriki: Um,
Madyx: there could be sacrifices which don't involve you ending your life, right? I'll just take less of that or, you know, in this instance, it doesn't have to be a permanent sacrifice, right? It's still well, yeah, you made me think of so many things.
One, I think you're articulating something really interesting, which is that the base of your social stack, which informs your worldview, is going to radically open or close, I think, how comfortable people feel sacrificing
Rauriki: yes,
Madyx: themselves for the greater whole. And where I'm at philosophically is very much in between,
not in the binary. Like, we get the extreme where the individual is always sacrificed for the whole, or the whole is always sacrificed for the individual. I think both of those extremes are not [00:32:00] healthy. And then you also made me think of, obviously, a materialist versus an animist.
How could you even fault a materialist for being like, why would I do that for them? Like there's no greater meaning. There's no afterlife. There's no anything. Right? It's just here. So it's like, who cares about that? Like, I don't even know how you could philosophically or ethically fault that person if they genuinely were materialist.
We can say there's a law, but philosophically, how can you say it's wrong? What is there even to assign that to? Whereas if it's a relational, animist worldview, then it's a whole different story. And my understanding of Japan, like, that sort of pre-Buddhist worldview, the Shinto worldview.
My understanding of it is that it is, it felt very animist when I heard about it, right? Like mountains are, have a life force and awareness and rivers and, you know, it felt very animist to me, right? And I [00:33:00] remember some people hearing that and just from a Western point of view, being very confused,
like thinking it was really alien, right? Hmm. Hmm.
Rauriki: When you see the world, or when you have that base stack of animism where you're connected with everything, you then re-contextualise your scale in relation to everything else. One of my mates, we were talking about mahinga kai, traditional food gathering as a cultural practice, and he said when you go out into the sea, if you're diving, you go out into the sea, into the taiao, you become aware of your insignificance compared to everything, ka rongo koe i to iti, you feel the vastness of the ocean, how small you are in comparison to that, but how connected you are. And I feel like if you have the opposite perspective of [00:34:00] a materialist, you know, material only exists within the consciousness that you see, so without you there's nothing.
Madyx: Hmm.
Rauriki: So that was, yeah, around the base stack of animism: you can kind of put yourself in the right perspective relative to everything else.
Madyx: It's funny, that's the same sort of lesson you learn as a surfer as well: that you're so powerless compared to the ocean, and that in almost every circumstance of going under, or fighting against rips or currents or a particular wave, there's no fighting it. And then you get to a place of acceptance and surrender.
When you go under on a big wave, the absolute best thing you can do is be completely relaxed and calm, and that makes the experience much closer to pleasant. Whereas if you're fighting in [00:35:00] any way, you lose your breath so quickly, you panic, and it's terrifying. But if you can find a way to surrender to that force, which is so vastly more powerful than you, you can have a much more pleasant experience.
And a higher chance of not dying,
Rauriki: mmm.
Madyx: I always try to find the middle ground of like things that we need to figure out for our mission and advocacy for Whanake Foundry and what's interesting and relevant now.
In a permaculture or, syntropic agriculture type way, the things that can serve multiple functions across a wide domain of things are so valuable and useful.
Rauriki: Hmm,
yeah, when you talk about contributing to the whole, I think of small villages. I think of a marae when there's a kaupapa on, whatever the event is: all the [00:36:00] jobs get split up, you know who's doing what, there's only a few people, and everyone's trying to chip in and contribute, because you know the big vision, it's so small, it's all in the same place, you know what you're working towards, and you're working together. But there's some people who, you know, dodge jobs, and in this small context it is a behavior, but it's not as detrimental, I suppose. And then when you scale that up to society, it's tricky for everyone to work collaboratively together, because you're not clear what the thing is that we're working towards.
So you feel, oh, we're just doing stuff now. And then if you scale up the person who dodges their contribution, at that scale it looks like, you know, huge multinational corporations extracting profit and not contributing any tax or anything back into the [00:37:00] whole.
Madyx: Yeah. Yeah.
Rauriki: There's, I think there's some merit in a shared vision at that smaller scale.
It's easier to see and understand, and the scale at which people contribute or don't contribute, you can work with that at the small scale. But at the big scale, when you lose the collective vision, when bad actors have such an influence, that's kind of where you're getting into exploitation, the five-day, six-day, double-shift type work week. So yeah, I haven't landed on anything, but it's interesting that you brought up those examples, because having three days of work and three days of rest plus one day of rest [00:38:00] sounds like more balance.
Madyx: Well, I just, you know,
Rauriki: yeah.
Madyx: I think we need to identify some aspirations that are desirable to get to, and then the brilliant creativity of humans can help solve that. I don't think people even allow themselves the space to believe that anything aspirational is a reality, especially, I think, a lot of young people, a lot of people that feel displaced or disconnected, which is probably most people under 50, right?
And I'm not, I don't want to exclude, I'm not attacking people over 50, but I think the alienation and the displacement and the disconnect has grown over time.
Rauriki: Hmm.
Madyx: There are plenty of people of all ages that feel that deeply, depending on your life experience. But I think we need things where you actually go, yeah, hell yeah, that would be so much better,
and I have some motivation now to [00:39:00] contribute to getting there. But right now, and I'm sure we've talked about this so I won't go on about it, right now it's mostly just avoid negatives; at a high level that's all we really have on the horizon, right? Avoid climate catastrophe, avoid nuclear war, avoid poverty, avoid whatever, right?
It's like, don't do that because this super bad thing will happen. Again, that's not going to generate the level of societal shift and change that we would need to radically transform things. But if all of a sudden you have a cluster of things which is exciting, and people can think, if I was in there, I would enjoy life so much more,
I'd be happier, I'd be able to do things that I want, I wouldn't be personally and culturally destructive. If they think that this set of circumstances would actually deliver that, and it's something that they can conceptualize [00:40:00] and hold, I mean, this is the whole pitch of Whanake, right?
That could really, we believe, drive a shift in society. And that's what accumulating a group of things that are exciting to people does. Individually, like a three-day work week, it's cool, but that's not going to be enough to drive it. But I think maybe
Rauriki: it has to be.
Madyx: if you're,
Rauriki: Like, like, um,
Madyx: yeah,
Rauriki: shared that example way, uh, a while back. I don't know if we talked about it yet, but around the moon race,
Madyx: yeah,
Rauriki: You looked up in the sky at night and you saw the moon. And so you were a part of something, reminded every night, we're going to go there, we're going to get there collectively, we're all contributing together. And it was very clear for everyone, everyone in the whole world.
Madyx: yeah,
Rauriki: And yeah, if we could have something like that, that's clear in everyone's mind, that every day reminds you of something better, of advancement. [00:41:00] You know, I don't want to get into all the details, there were probably a whole lot of war contracts that came out of the military side, but having some kind of shared,
Madyx: yeah,
Rauriki: shared activation.
Madyx: yeah,
Rauriki: that everyone can see clearly within themselves and understand exists within other people collectively,
Madyx: yeah, yeah, yeah.
Rauriki: could be a good thing, versus a bad thing, versus avoiding negatives, you know. Could it be, like, reach for the moon, instead of don't lose to the Russians or the Chinese? Um, yeah, man.
Madyx: And if a person's worldview conceptualizes or finds value in thinking of things as journeys or trips, how can you navigate and have [00:42:00] a good journey without a destination, right?
Rauriki: I think there needs to be a journey, uh, sorry, there needs to be a destination to guide your journey, and when you
Madyx: yeah.
Rauriki: um, the island in your mind, you know, can you see the island in your mind? That's, um, I'll never get it proper, Mau, uh, Mau.
Madyx: Is that part of the
Rauriki: he was like,
Madyx: Right.
Rauriki: the person who brought back celestial navigation for, um,
Madyx: Why?
Rauriki: Polynesia, and he just had this kōrero passed down over generations to him. He had never left the island, and they said, you know, can you sail here and here? And he was like, yeah, yeah. And then he said, the first thing you have to do is, can you see the island?
Can you see it in your mind? And that's kind of like half the journey, because you'll be encountering storms and waves and, you know, uncertainty. But if you can see the destination in your mind, then you know [00:43:00] that you're going to get through it. And if we can have a destination of what that looks like, you know, if it's a three-day work week as step one, going all the way to this solarpunk, animist, full Game B future. One of the famous examples was they asked him, can you sail to this place? And they jumped on a ship and he started navigating, and then they pulled out the compass, like they pulled out some navigation technology, and he jumped off and swam back, because he was like, man, if you didn't trust him, if you didn't
Back to my
Madyx: True.
Rauriki: and what's my, what's the point? So, um,
Madyx: Yeah.
Rauriki: and like, yeah, that's why we have all the schools of knowledge around celestial
Madyx: Yeah.
Yeah.
Rauriki: Coming back to our Whanake stuff, like back to the ground, back to operations, um, slash Midjourney, you know, like, with
Madyx: Yeah.
Rauriki: text to image, text to video coming out now. It's pretty much text to [00:44:00] storytelling,
Madyx: Yeah,
Rauriki: Aspiration.
Madyx: to storytelling.
Rauriki: Idea to storytelling.
And then from storytelling to shared aspiration as a society, like those are the, those are the stepping stones.
Madyx: We have Dune in the movies, and in that sci-fi they had a massive war with AI. Again, this is very poignant to where we are now, you know. And so there are no computers, so humans do everything, and they have to navigate this complexity of jumping
through hyperspace or whatever, between two points, and they don't have computers to do the calculations, so people take a sacred psychedelic and then navigate it. And it's, I don't know, it's just making me think,
Rauriki: that's,
Madyx: is that a non computer alternative to space navigation, that's based on that tradition of celestial navigation, you know, in Polynesia.
Um, um,
Rauriki: [00:45:00] Bro, that's so cool. Um, I think that, you know, indigenous knowledge is linked to place, and so indigenous knowledge of an ecosystem will develop as the connection to that place develops. And then when an indigenous people migrates, they'll take the principles of living with the environment and all those things, and apply them to the new environment.
There'll be a transitionary period where the knowledge from the previous place doesn't apply to the new place, but they'll start to develop it, and place-based knowledge will emerge. And so we're on our planet right now, and celestial navigation is tied to, I think, will be tied to our planet potentially. As we travel to a new planet, or interplanetary travel, the principles of navigation would probably be similar, but you would [00:46:00] develop this new knowledge based on wherever you go, as you go. And then once you've fleshed out some part of the galaxy, you'd have this huge body of knowledge that some people would potentially hold, and that would require heaps of computational power. So that example you used of, um, like the, what are they called?
Are they the, are they the Bene Gesserit? Do they do that or nah?
Madyx: no, it's
Rauriki: they're
Madyx: the Guild Navigators.
Rauriki: the Guild Navigators,
Madyx: The Navigators.
Rauriki: They're, um, yeah, they would have to process all of that. But that is at the same scale as these indigenous human computers holding all
Madyx: Yeah,
Rauriki: without the
Madyx: exactly.
Rauriki: on um,
Madyx: External technology. Yeah.
Rauriki: Like they are the technology.
Madyx: Yeah. Well, there's so many things. One, that made me think of when we talked about these cycles of destruction on Earth and the, you know, [00:47:00] collapse of biology and the re-emergence. We talked about how the oral transmission of knowledge is actually probably the most survivable through an apocalypse, which is fascinating.
Like, it's a technology that's actually perfectly adapted to Earth if you have periodic catastrophes, right? And maybe complex external technology is way more prone to being completely lost, whereas with humans who hold this knowledge, you don't need a sophisticated production network to make the new iPhone and server rack to carry it on and transmit it, right?
And of course, we've had to challenge ourselves, if we're really being animistic about how we view things, that these divisions of technology being alien to humans are probably not appropriate.
It's more [00:48:00] words like external and internal, biological or silicon, you know, metal or carbon, right? Rather than sort of the Judeo-Christian division where humans are wholly separate and distinct from the rest of the world. And it also made me think, bro, I remember reading early descriptions of Western explorers encountering indigenous people in Australia, and I think maybe in the Gobi desert.
And they would always describe it from their view: it's a wasteland, it's a desolate wasteland where they live, there's nothing there. And when you're talking about how this indigenous knowledge is relational to the land and place, I was thinking maybe we'll look out one day at space and say it's a void, there's nothing there.
But that's what those Western explorers were saying about the places where some indigenous people live. Oh, the ocean is a desert, there's nothing there. And okay, with no skill you die in 20 seconds in space, or five seconds, but there are some places on Earth where [00:49:00] indigenous people thrive where, if I just wandered in,
I could be dead in an hour, you know, from heat stroke or whatever. So I do wonder, maybe these places which, in a Western view, just look dead and lifeless, maybe someone with an indigenous
Rauriki: Yeah,
Madyx: approach might say they're not as completely dead and lifeless as we think they are.
I'm not saying we'll be swimming outside in space with no spaceship, I'm just saying it seems like there would be some hubris in saying we can be certain of how lifeless they are. You know, maybe in the vast future there will be an indigenous people of the space between planets, you know, the space between these bodies.
Anyways, I just, yeah, this got me thinking that it's easy to judge things as [00:50:00] being devoid of resources or potential to be a place to thrive until you know them, and we don't have a deep history of people living out there yet. So how can we judge that?
Rauriki: That's like the Fermi Paradox, eh, where they're looking at the likelihood of societies or civilizations in the galaxy. I suppose, yeah, that's based on a society that's exactly the same as us. Just like how those Western explorers were looking: if they saw another Western-type society, they'd be like, damn, we found one!
But there were none on our planet, there was no one like them. But there were advanced civilizations in a different way that wasn't seen and understood and captured in their knowledge systems, their whole worldview. So maybe the likelihood of chancing upon or encountering [00:51:00] advanced civilizations goes up if we broaden our understanding of what we understand as an advanced civilization. And then that might be what we have to do when we think about futures: what do we think of as the future state of civilization? Because we probably have it in our mind, oh, it needs to look like this, but that's only because that's what we know, when actually there might be the Aboriginal outback desert civilization that we have no understanding of, but it's rich with life and understanding and connection to that place for them. Mmm.
How do we summarise this? What, what did we talk about in our like,
Madyx: See
Rauriki: yeah, yeah, yeah,
Madyx: if GPT 4 can figure that out.
Rauriki: hey, figure out what we went on about. Um.
Madyx: Find something useful from this.
Rauriki: I think it all comes back to that [00:52:00] relational worldview and that relational stack; AI decisions would be based on that. The way in which you develop out the civilization across the cosmos, to other planets, would be based on a relational, you know, you're not going to set up a monoculture and replicate yourself there. You'd have
Madyx: Yeah.
Rauriki: that understanding that where you don't see something similar to yourself, there still may be beauty, you know. We're gonna have completely different ways of living, but all equally valid, all equally relational with the ecosystems they come from. And coming back to us, it's like, yeah, what does the future look like, and how do we not fall into that trap of what it has to look like? When the Western travellers went somewhere, they wanted to see something specific,
Madyx: Yeah.
Rauriki: and they didn't, and they thought, oh, that's not it. How do we, on the journey to the destination, let the destination guide us, but [00:53:00] not let that blind us to what other potentials there are?
Madyx: Yeah. It makes me think of like, SETI, the search for extraterrestrial intelligence. This
Rauriki: Hmm,
Madyx: outfit. They get criticized constantly because they're mainly looking for radio transmissions, and people are like, surely that's a terrible way to transmit information intergalactically, so why are we looking for signs of advanced civilization via this thing which isn't good for that?
And they're sort of like, well, we only have radio telescopes, but it just made me think of what you were saying, the Drake, the Drake, the Fermi paradox. No, the Drake equation.
Rauriki: eh?
Madyx: Yeah.
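For reference, the Drake equation Madyx is reaching for estimates the number of detectable civilizations in the galaxy as a product of factors. The parameter values in this sketch are made up purely to illustrate Rauriki's point that widening what counts as a recognizable "advanced civilization" changes the answer:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# Parameter values below are illustrative guesses, not established estimates.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of civilizations in the galaxy whose signals we could detect."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

common = dict(r_star=1.5, f_p=0.9, n_e=0.4, f_l=0.1, f_i=0.1, lifetime=10_000)

# Narrow definition: only civilizations broadcasting the way we do count as "advanced".
narrow = drake(f_c=0.01, **common)
# Broadened definition: more ways of being advanced count as recognizable.
broad = drake(f_c=0.2, **common)

print(f"narrow definition: ~{narrow:.1f} civilizations")
print(f"broad definition:  ~{broad:.1f} civilizations")
```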
Rauriki: I was thinking about, what's the state of civilization going to look like? How do we make sure we don't get fixed on one thing and expect that to be what it is? Because, like, Midjourney and, you know, these kinds [00:54:00] of generated images are planting ideas of what it should look like.
So how do we use it to guide us, but also not let it fix
Madyx: Restrict. Yeah.
Rauriki: our idea of what it needs to be, because we might get to something else, or we might not explore what other things could be that would be completely different. Like, I'm just
Madyx: Yeah.
Rauriki: um, you know, the British Imperial Centre versus the Red Centre of the Outback, when they look out there, they'll be like, man, this is just a bloody desert where there's no
Madyx: Yeah.
Rauriki: way you survive here? But man, life has been happening there for tens of thousands of years.
Madyx: Yeah.
Rauriki: My daughter's back.
Madyx: That's a good cue, huh?
Sweet, bro. Um, I'll let you go, brother. It was just,
Rauriki: you wrap us up bro. Cool[00:55:00]
Madyx: sounds good. Episode 8 done,