Transcript

EP 5 - Human Novelty AI Iterative Symbiosis
===

Madyx: [00:00:00] And we can, if we remember, we can do an official start. But,

Rauriki: that was my, that was my informal, it's our official intro.

Madyx: That's your official start? Yeah. I was thinking about a few different podcasts, and there's like, you know, some are so hyper-produced, and some just start, and people are talking, and you have no idea,

Rauriki: that's how we start.

Madyx: I don't know where that's going. I guess it's just that there's a nice range, you can do whatever, and there's someone else you can be like, well, that's what they do, and

Rauriki: Ao Maori

I don't actually know how many podcasts have a song. It does have a song, so, um, but yeah, I was just talking about kind of our whanake processes and then developing AI, and just kind of working on those AI tools that allow us to do more with less time, um, and scale up our capacity, I think, and free up our capacity at the same time, um, our [00:01:00] thinking capacity.

So, um, yeah, it's been some crazy things, crazy potentials. But I know that, yeah, I don't know if this is the topic for the podcast. Hey, what AI tools are you using for transcribing your meetings? Um, but yeah, that's just something that's happening in the background.

Madyx: Nice. Yeah, you know, like, of course, the gen AI debate and stuff is always in the news, and it's in, like, our work, and I was just thinking, sometimes I look at it, and I'm just like, is this even, like, worth engaging with? But then it's like, or I could take the approach of, how do I get this to a point where it is, like, valuable, you know, rather than just making a judgment on where it is currently, which obviously is not gonna

Rauriki: I, how do [00:02:00] I fully appreciate or utilize the value within this? And how do I use it to contribute to a greater purpose, you know, a whanake purpose? Um, cause man, like, just the ability to go onto ChatGPT. Like, I was trying to do something manually once.

If I did one, doing it a hundred times would take, like, ten minutes. But then I just asked ChatGPT, hey, how do I code this, and how do I make it faster? And I press the button, boom, it's done. Um, you know, transcribing, we had a family meeting. Um, and we were like, okay, we established a trustee who's taking the minutes, and I was thinking, I'll use AI to just go through and tease out the key motions. Like, you could ask, any time someone passes a motion or uses specific language around motions, then capture that and output that.

So yeah, that's [00:03:00] just the experience, how to capture the value and, yeah, make it work.
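The motion-capture idea described here, scanning a meeting transcript for motion language and outputting just those lines, can be sketched in a few lines of Python. The motion phrases below are illustrative guesses, not any formal standard; you'd tune them to how your meetings actually word motions.

```python
import re

# Phrases that typically signal a formal motion in minutes. These
# keywords are illustrative guesses, not a standard; tune them to
# how your meetings actually word motions.
MOTION_PATTERNS = [
    r"\bI move that\b",
    r"\bmotion to\b",
    r"\bseconded\b",
    r"\bmotion (?:is )?carried\b",
]
MOTION_RE = re.compile("|".join(MOTION_PATTERNS), re.IGNORECASE)

def extract_motions(transcript_lines):
    """Return only the transcript lines that look like formal motions."""
    return [line.strip() for line in transcript_lines if MOTION_RE.search(line)]

lines = [
    "Karakia to open the hui.",
    "I move that we adopt last month's minutes.",
    "Seconded by Mere.",
    "General discussion about the roof repairs.",
    "The motion is carried unanimously.",
]
print(extract_motions(lines))
```

A keyword filter like this will miss informally worded motions; it's a starting point you'd refine against real transcripts, or replace with a language-model prompt.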

Madyx: Yeah. How do you make it worthwhile, or how do you make it valuable, rather than just go, oh, this content's average? You know, it's like, how do we get it to a point where it is adding value, and how do we apply that in our work, and that sort of thing. And, like, I guess the other thing about AI is, it's amazing at doing certain things right now, that might change, but for now, it's really good at, like, iterating on things, processing things, right?

And so we can provide it, like, raw material. But it's very iterative, right? So if it doesn't have novelty, if it doesn't have fresh, uh, what is it, like, feedstock, you know, for it to work with, then it can become a little bit repetitive and samey. And so then I'm thinking, well, like, you [00:04:00] know, how do we, because I think we may have talked about this and I wrote it down, starting to talk about it.

So if you look around at the patterns and what's happening in civilization, it's like people are becoming more and more robotic. Well, they're being forced to behave in a way that's very machine-like. And then we are trying to get machines to be very human. And, you know, a lot of people have made this joke, which is amazing: why do I have to prove to a robot online that I'm a human?

Like, why am I proving to a robot that I'm human? And I was thinking about that. All those little captchas, like picking the things, that's not infinite, but that's a massive data set for robots to understand how to prove they're human. It's like we're literally feeding in data on how would a human

Rauriki: yeah. I think that's a, there's a

Madyx: So there's all these little like weird, uh, yeah.

Rauriki: it's, it's the most human or, or something like that. But, but [00:05:00] yeah, it's crazy 'cause that's, that's such a

Madyx: Yeah,

Rauriki: man. I've worked jobs, and I'm like, these jobs, I was just thinking, this could be automated.

This could literally, if you had, like, um, you know, computer imagery or, what do they say? What's it called? Is it image capture technology that you have now? You could just read the files and then just read them in, and it could all be processed out.

So yeah, I've felt like I'm working like a robot. So every time, I feel like, am I being robotic? I always think, can AI do this faster than me? Because why would I run to the next town if I could drive? Unless you were wanting to achieve some other benefit, you know, I want to improve my handwriting, so I'm going to write this 10,000 word essay myself, or, I'm alright with handwriting, I'll just type it up. So yeah, I'm trying to figure out [00:06:00] when, where, um, AI can come in, and I think, you

Madyx: Yeah.

Rauriki: There's a, there is a bit of a barrier to add, like a barrier to entry into, um, taking the robotic jobs that you do or the robotic things in life and then automating them out.

But the scary thing too is you could be automated out as well. And I think if it came to paying AI or paying a human to do a job that's a monotonous job, in our society, in this Game A society, you're gonna get AI'd out. So, um, I think we've had conversations before, eh, about how do we lean into the non-AI kind of capabilities of humanity, and that used to be, like, the arts. The arts was something a robot couldn't do, and now you've got generative AI,

Madyx: Yeah, I saw, like, AI stand-up [00:07:00] comedy, and there were four comedians there, or five of them, and then they would play this AI comedy, and it's pretty nuts. That would be one of the last things that a year ago I would have thought was at risk. Almost all of them said the delivery is not as good, like the cadence and stuff, but the writing is already better.

And it was nailing their style. They were in, like, hysterics, they're all just talking trash to each other, like, you're clearly replaced. And so it's an interesting, it'll be interesting

Rauriki: I think in a Game B future there's definitely a role that's, like, the big waka, the big canoe that is AI is going hard out into this Game A future, [00:08:00] so what is steering it into Game B? Like, what is that even? Mm,

Madyx: Yeah, and we're not, um, saying that there's one solution or whatever. Whanake Foundry, I think, is working on this, like, uh, positive future that isn't, um, what's the word? It's not like a Luddite aspiration to abandon all technology. It's like, how can all of these components of civilization be net positives for all of the components of it? And we talked last week, I think, really excitingly, about how would an animist spacefaring society, or relational-based society, explore the cosmos? What would drive it, versus, like, an extractive one? And that was really cool.

Rauriki: You know, the movie WALL-E, when they're, uh, they're on the spaceship and everyone's on self-driving [00:09:00] floating chairs and all the food's made. And that's kind of a dystopian, it's a humorous dystopia where you lose your humanity, uh, and AI takes over everything and does everything.

So what is, um, what is a human-centered, I suppose, and maybe an ecosystem-centered, planet-centered, uh, expansion? Ooh, I don't like the word expansion, it seems overtaking. I think we talked last week about the balance between... Yeah, so maybe, hmm,

Madyx: Yeah.

Rauriki: yes, relational,

Madyx: I guess instead of expanding, we were talking about connecting, right? Like, how do we connect with more of the components of the galaxy? Whether that's star systems, or civilizations, or ecosystems on other planets. Like, can that be a shared [00:10:00] aspiration for that civilization, to increase the connectivity?

Yeah, we were talking about that, like, how do you measure progress on this wisdom or alternative, the social Kardashev scale? How do you actually measure progress? Because it's easy for them to measure energy harnessing, or it's easier, and, like, we're talking about how, can you just measure

Rauriki: Um, and just kind of going off that, back into our AI yarns, it's like, how can we, we're here, we're creating content, um, we have a limited amount of time in the day. How, at an operational level now, do we lean into AI to create, to help write out content and capture? And so that's kind of one of the things that we've, um, been trying to look at.

That's not to say I don't need to write that blog for the next session real soon, but I haven't done it. But that's something I'm trying to figure out, how do we scale, how do we use AI to do it. Um, [00:11:00] one of the other things that we were talking about for our podcast today is, um, reacting to video content.

I would love to put a video up right now and just kind of, um, dissect it. But maybe that's, I don't know. I think we've been asked about why we haven't chosen to put our faces out there. Do you think maybe we talk a bit about that? One of the things that we've wanted to do is

Put forward the kaupapa, and not necessarily put our face to it. Because one of the things I was thinking about was that when you put a face, the whole kaupapa is associated with the face, whatever you're talking about is associated with the face, and the brand, and the influence of the person, and then if you take that away, the kaupapa maybe can't survive or thrive. So it's trying to figure out, how could we have [00:12:00] people coming and going, but the kaupapa maintaining itself?

But yeah, that was one of the reasons. There's probably more reasons as we unpack them.

Madyx: yeah. The continuity and it just seemed to align better with a

Rauriki: Mm,

Madyx: decentralized, more resilient, more forest like system structure was that it's not that the individuals aren't important, but if, yeah, we're trying to make it in a way where a lot of the momentum isn't lost as people come and go,

Rauriki: One of the other reasons I think we chose to kind of remove our face from it was also to lean into, um, technology and AI providing the services that people would otherwise do. So we were coming [00:13:00] to a point where, um, I think we got asked who takes the minutes for our meetings.

We did get asked that, and I was thinking, oh man, like, literally, we can get an app in every meeting to transcribe now, and then we can tease out all the main motions and put them into a Word document. With our limited resource, we can't really allocate a whole person to do that, but what we can do is, um, pipeline that stream of work and have that kind of work managed by an avatar.
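The pipeline described here, transcription app in, draft minutes out, could look something like the sketch below. `transcribe` is a hypothetical stub standing in for whatever transcription tool you actually use, and the keyword filter is a deliberately naive placeholder; exporting to a real Word document would take a library such as python-docx on top of this.

```python
def transcribe(audio_path):
    # Hypothetical stub: in practice this would call whatever
    # transcription app or API you already use. Hard-coded output
    # keeps the pipeline shape runnable end to end.
    return [
        "Opening karakia.",
        "Motion to approve the annual budget.",
        "Motion carried.",
        "Closing remarks.",
    ]

def format_minutes(transcript_lines):
    """Keep only motion-like lines and format them as draft minutes."""
    # Naive keyword filter as a placeholder for smarter extraction.
    motions = [l for l in transcript_lines if "motion" in l.lower()]
    return "DRAFT MINUTES (auto-extracted)\n" + "\n".join(f"- {m}" for m in motions)

minutes = format_minutes(transcribe("hui_recording.mp3"))
print(minutes)
```

The point is the shape, audio to transcript to structured minutes, with a human checking the draft rather than taking the minutes by hand.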

And that's kind of why we've got our silicon avatars. Um, someone that's our administrator, or someone that does the video editing process. Um, and then we would work with the avatars, with the generative AI, as an avatar for a graphic designer. We would work with them as if they were a graphic designer, and say, this is what we want here and here and here.

Um, that was one of [00:14:00] the, I suppose that was why we have our kind of silicon-based team members, and us as the carbon-based team members. But yeah, I think, um, the other reason we didn't want to put our faces out there was because of the structure we wanted to operate. We didn't want to have a legal entity where we were the directors, where we would make all the decisions, and where we would be the kind of leaders. We would want the eventual establishment of a network, and anyone in that network could be the face, um, both of the podcast or of the brand, um, or of the actual operations of the kind of business or operational aspects. Everyone would have a kind of equal say. And so that was us trying to navigate that tricky space where, if you want to do anything, someone always has to be the face of it, [00:15:00] um, and with that comes a lot of responsibilities and preconceptions of what a kaupapa is, um, so yeah.

Madyx: Yeah. That's what I was thinking. You know, we're here saying we're exploring different futures, or, um, we're futurists, a think tank, or, uh, different ways of describing what we're actually doing. And then if we just did everything in the same traditional way, it seems like it's not that valuable to add, there's a lot of people doing it the normal way. If you look around, a big part of everything that's happening now is, like, each person is a brand, their identity is a brand. It's gone so extreme because anyone can make content, and there's huge positives to that, but it's also [00:16:00] like, everyone, their personal characteristics, everything about them is their brand, is their business. It's all sort of mixed together.

And so we're just, I guess, trying an alternative where, um, our name and our personality and all that isn't the brand. And I think you nailed it, let's try other things. Like, you know, we're in a situation that's not working well, so why would we keep trying to repeat the ways of doing things that have gotten us here?

And there's also the punk aspect of the anonymity, and being more comfortable to speak really honestly, especially during a transitionary phase. Um, if you want to speak freely, and sometimes it's going to be critical of Game A, or it's going to be open and honest, um, [00:17:00] I think it's more likely that us, and anyone else that comes along, will feel

Rauriki: mm Yeah.

Madyx: be honest and true with what they express,

and that's one of the most important things, I think, if you're trying to chart a new path, you know, and you're worried about someone saying, Oh, I saw you, I heard your podcast, you said X, you know, that's against company

Rauriki: it's a

Madyx: policy to have a view on whatever, you know?

Rauriki: Um, and hopefully it can be a safe space for others, because we were thinking it would be cool to, um, bring people on and talk to people at some point, once we've kind of fleshed a few things out. But, um, it would be cool to maybe protect their identity, or, if they were open to, you know, sharing who they were, maybe they would be able to share things more freely. Um, but, like, I think of Sia, you know, um, the artist. She has her hair in front of her face, no one knows who she is, type of thing. But if you Google Sia, you'll find her face.

It's not like we're trying to be [00:18:00] completely anonymous, like we don't want anyone to know who we are. It's just that what we're putting out there, we don't want it to be immediately associated with our faces, with who we are. We want it to be associated with the mission, the objectives, and the inspiring vision.

And anyone can be a part of that, and anyone can lead it.

Madyx: That's a very good point,

Rauriki: I love thinking of the forest analogy. I think because we live next to forests, it's real easy for us to connect to. There's a track, um, on the mountain range here. Um, there are a lot of kauri trees. We've got a few, about three big ones, um, on the way, and there's a few others in

Madyx: Oh yeah.

Rauriki: It's, um, yeah, it's kind of like, all trees have a role, and there are multiple types in a flourishing forest. There's kauri, totara, rimu, kahikatea, whatever. Um, and if one of them falls, you know, the forest continues and flourishes, and it's a part of the cycle. So it's kind of like, if the whole forest relied on one never falling, [00:19:00] um, then that wouldn't be a thriving ecosystem. But when a kauri falls, or when a small kawakawa plant, whatever, does its thing, the forest comes through. And so it's kind of, um, yeah.

Build our kaupapa like a forest.

Madyx: Yeah, that's interesting as an analogy for setting up Whanake Foundry so people can naturally come and go. But then I'm thinking about your analogy, and, you know, when a giant tree falls down, um, or leaves, I guess, it leaves behind this immense mass of nutrients, right? Even though there's a lot of coming and going in the metabolism of the individual tree, more or less those resources are in itself, and when it moves on, when it dies and falls, [00:20:00] it returns this immense amount of building blocks for other things to play with and to live off of.

Rauriki: I think whanake must be like the mycelium. I think we've had a few yarns on that, the soil within, and its ability to retain what you've shared in your time here, uh, and its ability to share that and distribute that amongst others so it can continue thriving.

And we, um, we recognise that in our kind of individual workspaces. We were talking with a lot of people, and there were, like, awesome people coming and going, but then there was no continuity, nothing holding it all together.

Madyx: Yeah, it's a classic for everything. Every passion project, every alternative that isn't making billions of dollars from Game A is often driven by someone who's really passionate, and then something changes, or they die, or they need to earn money or [00:21:00] whatever, and then the momentum's lost. And then someone else is like, hey, this problem's here, and they're probably going to come to many of the same conclusions, and then that's it.

Without the continuity, then I'm going to input all this time and energy to come to the same conclusions 10 million other people have come to, instead of just going, Oh, wow, they got here? Okay, like, let me keep going. You know? And that, like, the kaupapas can have, the missions can have,

Rauriki: Mm.

Madyx: they persist, and then people can come in, and people can leave, and we don't waste too much of the progress, you know?

That, I guess, was our strategic thinking. Because, like, it's not an easy task. Game A is dominant, Game A is really effective. We have to be so clever about whenever anyone can put energy into alternatives, you don't want to waste any of that progress or energy that is able to be put in, and sometimes it's in short periods or in [00:22:00] bursts, right?

And I think that was really important. I mean, again, not a unique realization, but we're trying to establish a structure through the tools we have now, DAOs, and not making it about a person, building a brand that's person-agnostic, or

Rauriki: Mm. Yeah.

Madyx: person or AI agent, you know. Like, I was thinking about that when you were talking about the agents as well, which is a trend that we just see. There's already agents, but we think it's likely it'll be more and more like interacting with a being, whatever that is, right?

Like, it'll probably be more like interacting with another person, um, as time goes on, to get services out of different AIs, right? And in one way, it's just adding a lot of fluidity if we don't make it super relevant whether it's, uh, carbon-based intelligence or silicon or whatever. [00:23:00] If, in Whanake Foundry, in how it presents information and shares value, it doesn't make it

Rauriki: Mm.

Madyx: matter whether it's carbon or silicon-based, then it just allows more fluidity, and more

Rauriki: I think one of the important

Madyx: useful for sustaining long

Rauriki: the distinction that it doesn't matter if it's carbon or silicon. The immediate response is, oh, so if robots are all working, what's everyone else doing? It's taking everyone's jobs? But that's in a Game A context. Because in a Game B context, they'll be relieving, you know, work and labour, so that people can do what people want to do, because there is no need to do those things.

You can live, you can play, you can thrive, because the labour's been outsourced to AI, or collaboratively done with AI. And that's kind of, um, why, within a Game B context, the [00:24:00] difference of who's doing the mahi shouldn't really matter. You can do it if you want to, or to feel fulfilled, or if it has to be done, and it doesn't matter if it's carbon or silicon.

But as long as the outcomes are going back to the people, community, ecosystems, and not all being shipped off to shareholders or another country like they currently are, then, um, yeah, that's something that's worth exploring and thinking about.

Madyx: It's so interesting how, like, our whole modern economy, Game A, is built around all these leakages of value, where people are inputting their time and their value, and then value is being created, and then a lot of that value is not going back to the place, the land, or the people that in large part created it, right?

And then it's going somewhere, and there's like, [00:25:00] all

Rauriki: Value sinks that are just

Madyx: right? And these

Rauriki: Yeah. Draining, draining wellbeing.

Madyx: Yeah. I think, um, yeah, so that's been our thinking around why we don't highlight our names. And, like you said, it's not like it's deep black ops, where you have to work for the CIA to figure it out. We're not hiding it, we're just not promoting it, right? Like, you could probably do a little internet research,

Rauriki: yeah.

Madyx: the one or two people that ask us and figure it out pretty quick, right?

It's not hidden, but it's not highlighted,

Rauriki: Like, that's,

That's single handedly changed the whole lot of technology and how we can interact and why we can have these conversations around potentials for Tao and if we have a whanake, a worldwide whanake network, um, Then, like, working on the top of a blockchain, and having these types of principles, uh, shared values around animism, Game [00:26:00] B and SoulPunk, would just be wicked, cos then, ah nah, it would just be, it would just be choice.

But it'll take a while.

Madyx: 100%. And I think, you know, it's really interesting exploring AI. What we've been talking about is not the type of analysis I've been reading, and it makes me think, why don't we think about it that way? We talked about Game B, and hyper-collaboration is one of the, like, principles of that alternative system.

And it's like, obviously, Game A forces displacement. You know, you just talked about that. But, like, how do we think about people and other types of intelligence and maximizing collaboration, you know? That's a Game B way of thinking about it, right? How do you create systems where, instead of being concerned about displacement, we're like, oh, what cool opportunities are there now for hyper-collaboration between human and other types of [00:27:00] intelligences? And that's what I was trying to get at a little bit when I was saying, for now, if how AI generates things is mostly iterative on what it's fed, then it would seem like the most useful and secure contribution a human can make, depending on how you want to look at it, would be to add novelty.

And then the silicon, the digital intelligence, if it's incredible at iterating, then you want a human feeding in novelty, and you want the digital iterating, organizing, producing, sharing, like, distributing it. And then you want creativity and novelty fed into that system so that it stays fresh.

You know what I mean? And so that's what I was thinking [00:28:00] about. You know, you've been working with AI agents, we've been talking about how can we have dialogue, how can we have discourse, and then generate useful stuff from that. And that's how I'm thinking of this: how can we, through our talks, and this time, generate novelty, and then how can we leverage AI to share that in a useful way that doesn't just add a lot of fluff out there, you know? And I think I initially was looking at it the wrong way, thinking, oh, look what it produces now, is it valuable, is it that interesting, or is it just really derivative? But that's the wrong question. So how do

So how do

Rauriki: in my

Madyx: leverage it

Rauriki: When I think of AI, as I've been exploring it and working with it, it's got input and output. And the inputs can be, yeah, like, general everything, and that's where you get, um, just repetitive outputs, [00:29:00] or it can be your own inputs. And so that's like, how can we give it whanake input? How can we create data and give it that, and create something meaningful for us?

And then another thing I went into is, um, there are inputs, uh, like whakapapa that's online, you know, but that's held by a certain iwi. Like, there are some inputs, almost consent-based inputs, that shouldn't be there. So if it's accessing that with unconsented access, um, maybe it's indigenous knowledge, then does your utilization of the AI mean you're partaking in that exploitation of that knowledge?

So then, those are the input parameters or barriers, things to consider. And then the output is, okay, is this output going to be contributing positively to society, or just going to be improving shareholder wealth? Or is it going to be, 500 people are going to lose their jobs [00:30:00] at this place because of this?

Um, and so that's that, that's that balance of, um,

Madyx: yeah,

Rauriki: Yeah, looking at AI as a tool, and then looking at its inputs and its outputs within the social context. And then even the tool itself is, uh, capital, and so who gets to make the decisions on input and output is whoever holds the capital.

And um, I think I saw a video pop up, and it was of the founder of, I don't know if it's the founder or CEO of OpenAI, Sam Altman. He said, we always thought it was about whoever has the best algorithm, whoever makes the best use of the input to make output data, but it's actually the compute, the computational power. Which is, like, whoever's just got the most money to have the most grunt to smash everything out. So it comes back to capital.

Power discussion [00:31:00] again.

Madyx: It's so interesting how that's now the limiting factor, just compute power, and it's just like, how much compute power can we harness? And it's almost like being back in an earlier phase of the Industrial Revolution or something, when it's just like, we don't have enough raw energy,

Rauriki: We've talked about AI a lot in this session, and kind of its role in our operations, uh, both as AI avatars and us as carbon avatars, silicon-based and carbon-based avatars.

Um, and yeah, just figuring out what the role of AI is in the Game B, in the whanake future.

Madyx: Yeah, um, I don't know. I guess for me, my takeaway or closing thoughts on what we discussed today, it's just around, I [00:32:00] think it's really important to just try things and be open, and to allow yourself to go back to first principles and just, like, well, what if we try it like this? What if we try it like this? It's really easy to start down the assumption chain a bunch. And I think it's really useful to have people trying different things, being open and honest about how it went, and sharing that, and iterating on that, and using AI, and somewhat anonymizing the people. There's just different ways of doing it. We're not evangelists for that, we're just like, let's try this, we want to try this, we think it could be useful, and we'll be open and transparent about how it goes.

I think that's a lot of what we talked about today. That's my takeaway: people sometimes form a really emotive, strong response, and it's like, let's just try things and report back. And I think if more of us [00:33:00] could spend more of our time with

Rauriki: And I think just to add,

Madyx: would make progress a lot quicker.

Rauriki: everything's set up so you can't, you know, you don't have the, you have the rare opportunity, if you have the privilege, or you have been given the chance, like our supporters, our funders have given us this opportunity to explore this. If you don't have that, then you can't try anything new.

So I think, with that, with great power comes great responsibility, to quote Spider-Man. Well, I think it was Spider-Man's auntie, uh, in the comics. Um, but yeah, so I think that's us asking ourselves, why not? We've got the chance, we have the opportunity to say, let's try it. And if it does work, then it can be something we can communicate.

And if it doesn't, then, um, at least we've had a crack at it and tried it. And then we can iterate off that.
