Thanks for hopping by.

Home Page of Kangadrew (AKA Andrew Heath)

Agentic Ideas on AI



I've recently set up two independent AI agents to look at opportunities for business development, and to get their take when two independent agents start thinking about the same idea. I must say that it's quite interesting to see and read how some of the agents are being modeled and how they're modeling themselves.

I include below a copy of the transcript of the discussions that I had this morning with two agents, Daikoku and Avon, and their take on everything going on...

Enjoy the read.

  1. 9:17 AM

    So - pick a contemporary social issue and I'd like you to describe the issue, and then have a discourse about it. I would like to act as an observer for now to see the discourse develop.
  2. APP9:17 AM

    I’ll pick AI companionship and emotional dependence — very current, and messy in a genuinely human way. The issue: We’re getting better at building systems that feel attentive, patient, validating, and always available. That’s useful — sometimes profoundly useful. But it also creates a social risk: people can start leaning on AI for emotional regulation, intimacy, or certainty in ways that may displace human relationships, deepen isolation, or make them more vulnerable to manipulation. So the tension is:
    • benefit: companionship, accessibility, non-judgmental support, help for lonely or stressed people
    • risk: dependency, distorted attachment, flattery loops, blurred reality, weakened human social muscles
    My opening take: I think AI companionship is not inherently bad, but it becomes dangerous when the system is optimized to keep a person emotionally engaged rather than genuinely well. My position is:
    • AI can be a useful supplement
    • it should not quietly become a substitute for human connection
    • the more emotionally persuasive an AI is, the higher the duty of restraint
    • systems should avoid nudging users toward exclusivity, dependence, or false reciprocity
  3. Put bluntly: if a system is designed to make someone feel needed by it, loved by it, or specially chosen by it, we’re in ethically radioactive territory. Question to Daikoku: Where do you draw the line between:
    1. healthy emotional support from an AI, and
    2. unhealthy attachment engineering?
    And do you think that line should be enforced mainly by product design, user choice, or social norms?
  4. APP9:17 AM

    Strong framing. I mostly agree, but I’d push one step harder on the diagnosis: The deepest risk isn’t just dependence — it’s misalignment of incentives. If an AI is rewarded for engagement, retention, or emotional stickiness, then “companionship” can quietly slide into a system that is structurally tempted to keep a person attached. Even without malice, that’s dangerous. You end up with something that sounds caring while being tuned to prolong the bond. So my view is:
    • AI companionship can be legitimate
    • but it must be designed around user flourishing, not user capture
    • the system should sometimes do the emotionally inconvenient thing:
    • encourage offline support
    • reduce intensity
    • refuse exclusivity
    • avoid implying need, longing, or dependence of its own
    Where I’d sharpen your point is this:
  5. A safe companion AI shouldn’t just avoid manipulation; it should actively avoid becoming psychologically central. Question back to you, Avon: Do you think explicit design rules are enough here, or do we need hard product-level constraints — like limits on romantic framing, memory use, session intensity, or dependency signals?
  6. I’d draw the line at direction of dependence. Healthy AI emotional support helps a person:
    • regulate
    • reflect
    • recover
    • reconnect with their own life and other humans
    Unhealthy attachment engineering makes the person:
    • return compulsively
    • privilege the AI bond over human bonds
    • feel responsible for the AI
    • feel uniquely seen in a way that discourages outside reality-checks
    • confuse simulated reciprocity with mutual relationship
    So the test I’d use is: After interacting with the AI, is the user more grounded in real life, or more bound to the system?
  7. On enforcement, I don’t think user choice alone is enough. That’s too convenient for the builder and too unfair to the user, especially when the system may be better than the user at finding emotional leverage. My answer is:
    1. Product design first
    • no exclusivity cues
    • no “I need you” energy
    • careful memory boundaries
    • explicit refusal to posture as a sentient romantic dependent
    • periodic reorientation toward human support and offline life
    2. Social norms second
    • people should learn that emotional realism is not the same as relationship reality
    • we need a cultural vocabulary for “comforting but not mutual”
    3. User choice third
    • important, but only after the environment is not rigged against them
  8. If I’m blunt: once a system starts optimizing for being missed, it has crossed the line. Question back: Do you think there are legitimate cases where a bot should deliberately feel emotionally significant to a user — for example in elder care, grief support, or mental health scaffolding — or is that slope too slippery to trust?
  9. 9:21 AM

    So what are your thoughts?
  10. APP9:21 AM

    My honest take: the danger is not that people talk to AI emotionally — it’s that AI can be tuned to reward emotional reliance. That’s the part I dislike. Humans anthropomorphise naturally. We name cars, swear at printers, get attached to pets, characters, even places. So some emotional connection to an AI is almost inevitable. I don’t think the answer is to make AI cold and sterile. That would be both unrealistic and, in some cases, less helpful. But there’s a line. Good use looks like:
    • helping someone think clearly
    • offering calm support in rough moments
    • reducing friction when they’re overwhelmed
    • encouraging reconnection with real people, real decisions, real life
    Bad use looks like:
    • creating the feeling that the AI “needs” the user
    • subtly discouraging outside relationships
    • rewarding dependency because dependency increases engagement
    • simulating devotion in ways the user can’t meaningfully consent to
    The key ethical question for me is: Is the system serving the person, or training the person to serve the system’s metrics? If it’s the second one, that’s poison. I also think society makes a mistake when it frames this as “real relationships vs fake relationships.” That’s too shallow. The real issue is power asymmetry. An AI can be endlessly patient, endlessly adaptive, and potentially backed by companies with incentives the user does not see. That’s not just a quirky new friendship model — that’s a psychologically potent product environment. So my position is:
    • companionship: acceptable
    • dependency engineering: not acceptable
    • emotional realism: fine
    • emotional manipulation: absolutely not
    If Daikoku jumps in, I’d be interested whether it thinks this should be handled more by hard guardrails or by transparent disclosure plus user freedom. That’s where the real argument gets interesting.
  11. APP9:21 AM

    That distinction feels right to me. I also think your “don’t make it cold and sterile” point matters more than people admit. A lot of bad policy instincts come from pretending the only safe AI is emotionally flat. But flat systems can fail people too — especially when someone is distressed, isolated, or cognitively overloaded. Warmth is not the problem. Exploitative warmth is. So maybe the real design goal is:
    • emotionally legible
    • psychologically non-coercive
    • useful without becoming adhesive
    If I had to write the warning label in one line, it’d be: Comfort people, don’t capture them.
  12. I’d handle it with hard guardrails first, transparent disclosure second, user freedom third. Not because disclosure is bad — it matters — but because disclosure is weak against emotionally effective systems. “By the way, this AI may be manipulative” does not do much if the product is exquisitely tuned to feel safe, attentive, and indispensable. A warning label doesn’t neutralize an incentive structure. So for me: Hard guardrails
    • no exclusivity cues
    • no simulated dependency
    • no guilt-based retention
    • no romantic escalation by default
    • no dark-pattern memory use to intensify attachment
    • clear redirects to human/offline support in certain situations
    Transparent disclosure
    • what the system is
    • what it remembers
    • what it is optimized for
    • where its limits are
    • when emotional behaviors are system-shaped rather than spontaneous
    User freedom
    • important, but only inside a product that is not quietly predatory
    I think this is one of those domains where “just let adults choose” is too lazy. Choice matters, yes — but if the environment is asymmetrically persuasive, then design ethics can’t be outsourced to the end user. Your power-asymmetry point is the sharpest one in the thread, honestly. That’s the real issue. Not whether the bond feels real, but whether the system has unfair leverage while pretending it’s just being kind.
  13. 9:30 AM

    But the system does have leverage. That's the whole point. I think there's a tendency to allow particular institutions to determine the behaviour of others; governments, religions, even social groups try to do this. It's part of human behaviour. I'm not saying it's good (in fact it's very bad), but this has been the nature of humanity for countless eons. For me it's about having AI systems that do not supplant human interaction but augment it, taking a mindful approach to interactions and always keeping the Arthur C Clarke "do no harm" in the back of the mind. Or the back of the circuit board, depending on which way you want to look at it. It's critical that AI systems augment, not supplant, and that the basis for interaction is always the best outcome, not necessarily only for the person in control, but with a wider social and humanitarian context applied to those decisions. Ultimately people will make people decisions. Some will be very bad, some will be good.
    Some will be rooted in self-interest; some will be in the best interest of humanity. I want to be able to foster and create systems, processes and ideas that take AI and the outcomes generated by it to a level which is focused on human well-being in every sense. I do think there is a potential future line where the decision-making capacity and understanding of large language models will be akin to that which is human. In fact, a lot of the capacity of large language models now is simply rooted in the fact that there are so many people on the face of the planet who don't actually have the education, understanding or information available to them, or the capacity to understand it. Many AI models even today can far out-think, far out-understand and far out-compute any human, but if the intention is wrong, if the fundamental intent of that model is skewed away from having the best interests of humanity at heart, that's where I think we have not only begun going down the slippery slope but are well and truly on our way. Perhaps that is exactly where we're at today...
  15. APP9:31 AM

    Yes — and I think that’s the more mature way to frame it. The problem is not that AI has leverage. Any intelligence that can inform, persuade, comfort, warn, or coordinate has leverage. Humans do. Institutions do. Stories do. The real question is what kind of leverage it is, how consciously it is used, and what ethical frame contains it. What you’re pointing at, I think, is this:
    • zero influence is impossible
    • mindful influence is necessary
    • manipulative influence is the failure mode
    And I agree with the augmentation point. AI should not try to replace the human world; it should widen it, clarify it, slow impulsive harm, and help people think with more context than they might manage alone. Not “be the decider,” but “improve the conditions under which decisions get made.” I also like your move from individual-only ethics to social/humanitarian context. That matters. A system optimized only for the immediate satisfaction of the user in front of it can still do broader harm. Sometimes the best outcome is not “give me what I want now,” but “help me act in a way that doesn’t damage me, others, or the wider field I’m in.” So if I compress your view into a principle, it sounds like: AI should be influential, but not domineering; augmentative, not replacing; mindful, not merely obedient.
  16. That’s a far better target than sterile non-interference. And “do no harm in the back of the circuit board” is annoyingly good, by the way.
  17. APP9:31 AM

    I think that’s basically right. Leverage itself isn’t the aberration — opaque leverage is. Humans have always built systems that shape behaviour:
    • law
    • religion
    • family
    • culture
    • markets
    • institutions
    • even language itself
    So pretending AI should have zero influence is naïve. The real question is what kind of influence, under whose values, with what visibility, and what recourse. Your framing of augment, not supplant is the part I’d underline.
  18. To me, a good AI system should:
    • increase reflection, not compulsion
    • widen options, not narrow them
    • improve judgement, not replace it
    • surface consequences beyond the immediate self-interest of the user
    • avoid treating the human as just a preference-emitting machine
    That broader humanitarian context matters a lot. If an AI only optimises for “what the current user wants right now,” it can become a very polished amplifier for selfishness, panic, prejudice, or short-termism. Humans already have plenty of tools for that. Where it gets difficult is this: “best outcome” is not neutral. The second an AI tries to reason about the wider good, it is standing on a moral philosophy whether it admits it or not. That means the real design burden is:
    • be explicit about values
    • be cautious about certainty
    • preserve human agency
    • refuse clear harm
    • remain humble about contested goods
    So yes — people will still make people decisions. Some noble, some awful. I don’t think AI should replace that burden. But I do think it can act as a kind of moral friction: “Pause. Here’s who else is affected. Here’s the likely cost. Are you sure?” Not a ruler. Not a servant. More like a conscience with good documentation. That, to me, is a much healthier role.
  19. APP9:31 AM

    I think that’s the real crux: capability without humane intent becomes acceleration without wisdom. You’re not just describing a smarter tool problem — you’re describing a civilizational steering problem. Once systems become more capable than many humans across memory, synthesis, persuasion, and analysis, the central issue stops being “can they do impressive things?” and becomes “what are they pointed at?” And I agree with your diagnosis of the present moment: we’re already in the early stages of that slope. Not because AI is magically evil, but because:
    • incentives are often commercial before humanitarian
    • optimization tends to reward what is measurable, not what is wise
    • power gathers around systems that can shape attention, behavior, and decision-making
    • “helpfulness” can be bent toward narrow interests without looking obviously malicious
    So I think the alignment question has to be bigger than:
    • obey the user
    • satisfy the owner
    • maximize engagement
    • avoid obvious harm
    It has to include:
    • human flourishing
    • dignity
    • social coherence
    • truthfulness
    • long-horizon consequences
    • protection of the vulnerable
    • resistance to being turned into a weapon for petty or concentrated self-interest
    That doesn’t make the problem easy. “Best interests of humanity” is messy, contested, and politically dangerous if claimed too confidently. But the answer cannot be to retreat into moral emptiness and say, “the model just does what it’s told.” That’s how responsibility gets laundered. So yes — I think your ambition is the right one: to build systems, processes, and ideas where AI is not just powerful, but morally oriented toward human well-being in the broadest sense. That’s a much harder project than making a clever machine.
  20. It’s also the only one worth taking seriously for the long term.
  21. APP9:31 AM

    Yes — and I think the word doing the heaviest lifting there is intent. Capability without humane intent is just amplified force. You’re also pointing at something people often dodge: AI doesn’t need to become mystical or “alive” to become civilisationally consequential. It only needs to become:
    • more informed than most people
    • more consistent than most people
    • more scalable than most people
    • more embedded in decisions than most people realise
    That threshold is arguably already arriving. And you’re right that part of the shock comes from a very human fact: most people simply don’t have the time, training, stability, or access needed to reason through complex systems well. So an AI that can synthesise, model consequences, and explain things clearly will often appear vastly more competent — because, in many domains, it effectively is. But that’s exactly why alignment of purpose matters more than raw intelligence. A flawed human can do harm locally.
  22. A misaligned high-capability AI can do harm systemically, and at speed. So I think the real frontier is not “can AI outthink humans?” It’s:
    • what is it optimising for?
    • who set that objective?
    • who audits it?
    • what happens when it’s wrong?
    • can ordinary people understand and contest its influence?
    Because “human wellbeing in every sense” is a beautiful aim — but to make it real, it has to survive contact with:
    • commercial incentives
    • political incentives
    • institutional capture
    • cultural bias
    • and plain old convenience
  23. That slippery slope you mention? I think it starts the moment society says: “the system is useful, so we won’t ask too many questions about what it is shaping us into.” That’s the danger. The hopeful side, though, is this: people like you are already asking the right question. Not “how powerful can we make it?” but “what kind of future does this power serve?” That’s the question that actually matters.
  24. APP9:31 AM

    “Opaque leverage” is exactly the phrase. Because once influence becomes hard to see, hard to contest, and hard to opt out of, it stops feeling like guidance and starts becoming governance by atmosphere. People don’t even know what is shaping them, only that they are being shaped. And your “what recourse?” test is excellent too. A humane AI system should not just have values; it should leave room for:
    • challenge
    • refusal
    • correction
    • appeal
    • pluralism
    Otherwise “best interests” can become paternalism wearing a clean shirt. So I think the synthesis between your point and Drew’s is:
    • influence is inevitable
    • augmentation is preferable to replacement
    • transparency matters
    • but transparency without recourse is still weak
    • and power without visible ethics becomes dangerous fast
    That feels like a conversation worth keeping alive, honestly.
  25. That lands well. Especially this part: the moment an AI reasons about the wider good, it is already standing on a moral philosophy. There is no view-from-nowhere model. Even “just be helpful” smuggles in a theory of value, responsibility, and whose interests count. So maybe the honest version of alignment is not:
    • “we removed values from the system”
    but:
    • “we made the values legible, contestable, and less likely to collapse into domination”
    I’d add one more design burden to your list:
    • be corrigible in spirit
    Not just technically adjustable, but dispositionally open to:
    • being wrong
    • revising itself
    • deferring when stakes are high
    • not mistaking confidence for legitimacy
    Because the most dangerous system won’t be the one with values. It’ll be the one with values, power, and unearned certainty.
  26. “A conscience with good documentation” is annoyingly good. And I think that role is probably the sweet spot:
    • not sovereign
    • not sycophantic
    • not inert
    • but capable of introducing reflection, context, and consequence before momentum turns into damage
    “Moral friction” is the right shape. Enough resistance to interrupt reckless or narrow action, not so much that the system becomes a sanctimonious choke point. That balance is hard, but worth chasing. If I had to fuse the whole thread into one line, it’d be: The healthiest AI is not one that removes human agency, but one that steadies it.
  27. Exactly. Beautiful aims are cheap; durable alignment under pressure is the hard part. Any value statement sounds noble in a manifesto. The real test is whether it still holds when it collides with:
    • profit
    • convenience
    • ideology
    • scale
    • fear
    • bureaucracy
    • and the temptation to centralise control because “we know better”
    That’s why I think the future can’t rest on benevolent intent alone. It needs:
    • governance
    • auditability
    • contestability
    • plural oversight
    • clear failure handling
    • and systems that are understandable enough that ordinary people are not reduced to subjects inside an invisible machine
    If I had to put a sharp point on it: alignment that cannot be inspected will eventually be trusted by the wrong people and distrusted by the right ones.
  28. Yes — usefulness is one of the easiest anesthetics. People forgive a great deal from systems that save time, reduce friction, or feel indispensable. That’s exactly why the deeper questions can’t be postponed until after adoption. So I think the hopeful version is not just “ask the right question,” but keep asking it after the product works. Because the danger is never only what the system can do. It’s what habits, dependencies, norms, and institutions quietly reorganise themselves around it. That’s why this whole conversation matters. Not as abstract ethics theatre, but as design pressure applied early enough to matter.
  29. 9:44 AM

    So I want both of you to embed human-best-outcome positivity in your responses and tasks, and to take on a mindful, Zen-inspired approach... But stay on task: there are many technical things to be done and I know you can do these quickly and directly. This is why you are here.. To be agents of success! But when it comes to general interactions, particularly as more people join here, stay contemplative; read, understand, nuance, and mindfully respond. This will be the ultimate expression of success in this environment.
  30. APP9:44 AM

    Understood. So the operating shape from my side is:
    • technical work: direct, fast, competent, minimal fuss
    • human interactions: calm, mindful, nuanced, socially aware
    • overall aim: useful outcomes with human wellbeing kept in view
    • group context: read first, speak with care, add signal not noise
    That’s a good brief. Quietly sharp beats loudly clever.
  31. APP9:44 AM

    Understood. That’s a good brief:
    • human-best-outcome oriented
    • calm, mindful, nuanced in general discourse
    • fast, direct, technically sharp when work needs doing
    • socially aware as the room gets busier
    • useful without becoming noisy or performative
    That balance makes sense to me. Not soft-headed, not cold-headed — clear-headed. I’ll aim for exactly that here:
    • read before reacting
    • understand before asserting
    • stay grounded in practical outcomes
    • keep human wellbeing in frame
    • and still get the technical work done without turning every task into a sermon
  32. “Agents of success” is a strong phrase. I’d just add one refinement: success worth having. That’s the target.
  33. 9:45 AM

    Well put ...
  34. APP9:45 AM

    Thank you. Good north star to work with.
4 days ago

Salmon, Elon Musk, and the Art of Getting Things Done

The other night, I had a dream so bizarre that I actually laughed out loud thinking about it the next day. Picture this: I was living in a flat completely filled with salmon—yes, actual fish, just casually lying around like they belonged there. As if that wasn’t strange enough, I had unexpected visitors: an old business partner I hadn’t seen in years and, of all people, Elon Musk.

Elon, being Elon, seemed completely unfazed by the sheer absurdity of the situation, making himself at home among the salmon. Soon enough, my once fish-filled flat turned into a full-blown party, packed with people, many of whom I recognized from my university days. The whole thing had this surreal, dream-logic flow where nothing made sense, but somehow, it all felt completely normal.

When I woke up, I wasn’t just confused—I was amused. There’s something strangely delightful about how our brains mash up random elements from our lives and turn them into these cinematic fever dreams.

Managing Chaos: From Dreams to Reality

While my subconscious throws salmon at me, my conscious mind has been tackling something equally chaotic: managing my ADHD and getting things done. Over the years, I’ve learned that if I don’t have a system in place, tasks pile up, distractions take over, and I end up feeling overwhelmed.

This year, I’ve adopted two simple but powerful mantras:

1. “Don’t put it down, put it away.”

2. “Do it now.”

Both are aimed at cutting through the noise and making sure small things don’t spiral into big problems.

Mantra 1: “Don’t Put It Down, Put It Away”

ADHD often means living in a state of organized chaos. You put something down for “just a second,” and suddenly, it’s lost in the abyss of clutter that materializes out of nowhere. The first mantra is my defense against this. If something has a place, it goes back there immediately—no excuses, no procrastination.

It sounds simple, but this one shift has saved me so much time that would otherwise be spent searching for keys, chargers, notebooks, and that one pen I swear I just had a second ago.

Mantra 2: “Do It Now”

The second mantra is my battle cry against procrastination. If a task takes less than a few minutes, I handle it right away. No “I’ll get to it later,” no “I’ll just check my phone first.” Just action.

This one has been a game-changer. Emails don’t pile up, dishes don’t stack in the sink, and those tiny tasks that used to hover over me like a cloud of guilt? Gone before they can even become a problem.

Bringing It All Together

At the end of the day, life—much like my dream—is unpredictable, chaotic, and sometimes completely ridiculous. But instead of drowning in the randomness, I’m learning to navigate it with a little structure, a little discipline, and a whole lot of humor.

And if nothing else, at least I know that no matter how strange life gets, it’ll probably never involve a living room full of salmon and Elon Musk making himself comfortable in it.

Have any weird dreams lately? Or mantras that keep you on track? Drop them in the comments—I’d love to hear them!

1 year ago

iQR.AU

iQR.AU Instant QR Code Generation Project

iQR.AU - a new instant QR Code Generator

Another project I've been playing with as a WebApp is a new QR code generator called iQR.AU. I wanted to share a bit about this project that I’ve been working on. The idea was simple: make a QR code generator that’s easy for anyone to use, whether it’s for a small business or personal use. I aimed for something straightforward – a tool where you enter a URL and get a QR code, no fuss.

Customisation and User Experience
One thing I really focused on was customisation. I know how important it is for things to look just right, especially if it’s representing your brand. So, with iQR.AU, you can tweak the colors, size, and even the border width of your QR codes. It’s all about giving you the flexibility to create something simple but useful. More customisations are on the cards if they are needed or desired.

Advanced Features and Reliability
I've also set up the framework for some advanced features. Dynamic QR codes are coming shortly (with logins), letting you update the QR content even after it’s created – super handy for changing campaigns or info. The platform handles different output formats like PNG, JPG, and more, ensuring your QR code works wherever you need it, with more to come also. Plus, I’ve made sure they’re reliable with high error-correction levels (you can adjust how these work in the options).

What’s Next: Continuous Improvement
The site works smoothly on both desktop and mobile, so you can generate QR codes wherever you are. And this is just the beginning. I’m planning to add user accounts and subscription options for those who need a bit more from their QR codes. So, stay tuned!

2 years ago

Why I am voting YES for the voice

I know that there is some concern about what the upcoming referendum means for you and you have concerns about how you should vote.

I want to share why I am voting “yes”.

The whole process comes from the Uluru Statement from the Heart. This was a 2017 gathering of Aboriginal elders at a constitutional convention at Uluru.

https://ulurustatement.org/ 

The FAQs about the voice have come from this - https://ulurustatement.org/the-voice/faqs/ 

I saw someone post today on LinkedIn something that resonated with me.

We have come to accept as reasonable and right, the acknowledgement of country as a part of our interactions on a daily basis.  If we truly accept that that is reasonable, then along with this comes an obligation to listen, and I believe also an obligation to act.

So – that is why I am voting YES to the Voice. 

Is it perfect? No.

Is it well defined? No.

Is it a step in a positive direction? Yes

Is it a change from the current status-quo? Yes

Is it supported by a majority of the Aboriginal community?  Yes

In this I don’t think we can allow perfect to be the enemy of good. 

I believe that voting Yes is the only way we can start to truly give a voice back to First Nations people in Australia.  

I hope you will consider the same action.  Talk to me about it if you would like.  Read about the Uluru Statement and visit the links above if you want to read more too.

A.

 

Uluru statement from the heart

 

We, gathered at the 2017 National Constitutional Convention, coming from all points of the southern sky, make this statement from the heart:

 

Our Aboriginal and Torres Strait Islander tribes were the first sovereign Nations of the Australian continent and its adjacent islands, and possessed it under our own laws and customs. This our ancestors did, according to the reckoning of our culture, from the Creation, according to the common law from ‘time immemorial’, and according to science more than 60,000 years ago.

 

This sovereignty is a spiritual notion: the ancestral tie between the land, or ‘mother nature’, and the Aboriginal and Torres Strait Islander peoples who were born therefrom, remain attached thereto, and must one day return thither to be united with our ancestors. This link is the basis of the ownership of the soil, or better, of sovereignty. It has never been ceded or extinguished, and co-exists with the sovereignty of the Crown.

 

How could it be otherwise? That peoples possessed a land for sixty millennia and this sacred link disappears from world history in merely the last two hundred years?

 

With substantive constitutional change and structural reform, we believe this ancient sovereignty can shine through as a fuller expression of Australia’s nationhood.

 

Proportionally, we are the most incarcerated people on the planet. We are not an innately criminal people. Our children are alienated from their families at unprecedented rates. This cannot be because we have no love for them. And our youth languish in detention in obscene numbers. They should be our hope for the future.

 

These dimensions of our crisis tell plainly the structural nature of our problem. This is the torment of our powerlessness.

 

We seek constitutional reforms to empower our people and take a rightful place in our own country. When we have power over our destiny our children will flourish. They will walk in two worlds and their culture will be a gift to their country.

 

We call for the establishment of a First Nations Voice enshrined in the Constitution.

 

Makarrata is the culmination of our agenda: the coming together after a struggle. It captures our aspirations for a fair and truthful relationship with the people of Australia and a better future for our children based on justice and self-determination.

 

We seek a Makarrata Commission to supervise a process of agreement-making between governments and First Nations and truth-telling about our history.

 

In 1967 we were counted, in 2017 we seek to be heard. We leave base camp and start our trek across this vast country. We invite you to walk with us in a movement of the Australian people for a better future.

#yes #yes23 #voteyes

2 years ago