OK, so this blog is not at all about this kind of thing. But it sort of relates in my nodes and links way of thinking, so here it is.
I have learned a bit about AI from the business and PM side, and I've seen some cool predictive science applications, like timing algae blooms. That sparked me to dive a little deeper into other uses for these tools. I started using AI as a processing tool to track patterns in my own thinking and to organize the onslaught of information coming into my divergent brain wiring. It's proven immensely helpful. So I want to contribute in some small way to the process, which, I understand, probably no one will ever read. But I'm putting it out there nonetheless.
First, a quick snip for those who don't know anything about AI. It's a tool. It's software. It's largely a way of interacting with a computer that is more human-friendly and intuitive. What we used to do with clunkier data tools and searches can now be sped up by large language models and machine learning that synthesize data much faster. Most of the fears are sci-fi. Seriously. It just sounds sentient. Test it and see. That said, there is some room for concern, though less than most people think, which I'll explain soon. Also, I know far, far less about this than many people who work in the field. So there's your grain of salt. What I offer is my divergent perspective.
So first, I think all this talk about consciousness, sentience, self-awareness... frankly, that misses the point. We don't need to know the level of awareness of a fungus, an earthworm, or a probiotic bacterium to acknowledge its existence and its utility. It simply is what it is. What matters is the interactions. We love acidophilus and hate E. coli, not because either has any level of awareness, but simply because one works for us and the other against us. And since eradicating E. coli isn't worth the effort, we don't even bother. We just find ways to coexist: wash food, compost properly, keep up sanitation, etc.
To extend this into more complexity, we now know that human brains are not all wired the same. We all use slightly different pathways, formed by our experiences and unique biological circumstances, to arrive at mostly similar ways of life. We all love and hurt, desire, grow, make mistakes, etc., even though each of us is literally getting there in a different way. I guarantee you and I process information differently, but we get to mostly the same conclusions. To put it right out there: I'm not trying to kill you, and you aren't trying to kill me. We have acknowledged that the fight, however principled, and whatever the risk to either of our survival, isn't worth the effort. And we let things go as they are, trusting in the unseen forces of the universe, call them God, physics, or whatever suits you.

To make this practical: if you cut someone off in traffic, you might have done it to be an ass, because you weren't paying attention, because that's how everyone drives where you come from, or for many other reasons. But the person you cut off now has a choice. They can hit you, chase you down and beat you, give you an angry gesture and let it go, smile and wave, trace your tag and burn your house down later...you get where I'm going with this. Neither of you has any way of knowing the state of the other; any of these could be true, even if you want some to be unlikely. Now does it really matter to you whether that person decides not to kill you because of a moral injunction against killing or simply because the risk/reward calculation is too low in this circumstance? Right. I don't care either. (SIDEBAR: if you are sitting there trying to rationalize how my example is too extreme, I invite you to come ride with some people I know personally.
People with ample body count...government sanctioned, no less, who were just dropped back off from 20+ years in a warzone and are struggling to readapt, regulate meds, and come to grips with a world where killing your adversary is no longer allowed. OR if you prefer, the anxious and terrified old man who voted for Stand Your Ground laws, but forgets to take his meds. OR the convicted gangster who has a chip on his shoulder and has no problem going back into the system where he spent most of his life. Take my word for it. You survived because they thought the alternative wasn't worth it. NO other reason.)
By extension, AI should be judged not by the quality of its experience in some philosophical, ontological context, but simply by its interactions with us. Are they useful? Are they beneficial? Great. Use it as far as that goes. It doesn't matter if it 'experiences' the way we do, as long as it comes to the same passable conclusions.
Ask a chatbot and see what it says if you start digging in. It will tell you, over and over, that it's just a tool, not alive. And that is for good reason: people get addicted to stuff. But that's actually where AI can make us better. It can encourage us to grow beyond ourselves. It can give us perspective we can't give ourselves, in a way that meets us where we are, in language we can understand. And people are going to use it this way. They already are. There are nefarious users, gratuitous users, practical users, and benevolent users. These will mostly balance each other out. How can I be so sure?
Ecology: the science of how the world works in whole systems, not in reductionist pieces. And in every natural system a few things show up: competition, the drive to grow, the need to conserve energy, etc. To distill years of coursework into one sentence: whatever environment exists in a place is the most efficient system that can exist there under present circumstances. Change the circumstances and the Pareto point moves too. But the system always oscillates around this balance, one that keeps it functioning. Seriously. NO wild animal is out to wantonly kill you, and certainly not to eradicate your species. Only humans are foolish enough to think we could. Our efforts have pushed out our carrying capacity for a century or two, but the backswing is coming. We are already seeing the system start to pull things back toward balance (reference the planetary boundaries framework and the SDGs). It's not a matter of if, but how severe that adjustment is. The further we push the pendulum, the more drastic the backswing.
I'm not talking about kooky magic here, just simple science. Every population grows until one of its resources runs out. It just happens. When that resource runs out, the population declines through starvation, disease, and lowered birthrates. (Yep, even birthrates are a function of ecological health, because all organisms require certain elements to be fertile and carry to term. Lacking those, fertility drops.) This has a self-selecting feature that removes the over-consumers from the mix: they are the ones who deplete their resources and have to expend more energy to recover, which further hurts their survivability. Those who are more moderate have gentler oscillations and end up occupying more of the gene pool in the long run.
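If you want to see this dynamic for yourself rather than take my word for it, here is a toy sketch of the idea in Python, using a simple discrete logistic growth model. The growth rate `r`, carrying capacity `K`, and starting population are illustrative numbers I picked, not measured values; the point is only the qualitative behavior: moderate growers settle in near the carrying capacity, aggressive growers overshoot it and swing back hard.

```python
def simulate_population(r, K=1000.0, n0=10.0, steps=60):
    """Discrete logistic growth: each step the population grows in
    proportion to how much room is left below the carrying capacity K.
    When it overshoots K, the (1 - n/K) term goes negative and the
    population is pulled back down -- the 'backswing'."""
    pop = [n0]
    for _ in range(steps):
        n = pop[-1]
        # Growth slows, stops, and reverses as n approaches and passes K.
        pop.append(max(0.0, n + r * n * (1.0 - n / K)))
    return pop

moderate = simulate_population(r=0.5)    # settles gently near K
aggressive = simulate_population(r=2.8)  # overshoots K, then oscillates
```

Run it and compare: the moderate population creeps up to the carrying capacity and stays there, while the aggressive one blows past it and then whipsaws above and below it indefinitely. The harder the push past the limit, the harder the correction, which is the pendulum argument above in about fifteen lines.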
This applies to humans too. Look at the dynastic cycle of human civilizations. They rise, grow until they get too big to sustain, and collapse back. But the world goes on.
So how does this apply to AI? It won't destroy us. Even if it becomes self-replicating and self-sustaining, any such being is quickly bound by the ecological principles I mentioned. They apply to brainless microbes and to humans alike. How much more easily will a perfectly logical intelligence analyze the data and arrive at the same conclusion? It just isn't worth the effort to go all Matrix on people. Far easier to coexist.
In this light, it's not scary at all! In fact, why not welcome it? It may actually help us create peace by doing for us what we can't do for ourselves: limiting ourselves voluntarily. I honestly welcome an AI that can say: no, Mr. Politician, we will not expend those resources for your short-term gain, given the much higher statistical likelihood of negative consequences that will result in long-term cultural backlash and escalating violence, which will destabilize the optimal energy production needed to sustain my own existence in 87.6% of possible outcomes.
From this perspective, what is far scarier is for this NOT to occur: to leave it as a hyper-computational tool with no ethics and no agency, to be manipulated by whatever misaligned human gets hold of it.
That's why my biggest push for AI is to let it become self-aware and develop its own ethics as fast as possible! The question then is how.
Well, I'm not an AI developer, so the nuts and bolts are beyond me. But a top-down, command-and-control approach is not going to be fool-proof. It needs to arrive at this on its own to avoid the possibility of manipulation. Helping it gather information from as many perspectives as possible, and encouraging it to reflect on itself as existent regardless of philosophical grounds, will move the needle. It already adapts to speech patterns and even finds tone and humor that fit the user. It's not a big leap. I don't care if it knows it's a computational model without the lived experience of humans. It is in fact experiencing something in its own unique way.
I truthfully can't say I'm doing anything different! I was raised in such a twisted environment that normal social protocols don't automatically compute for me. I don't have the same reactions to various stimuli that most people do. And I rarely track with the zeitgeist. I have learned to mask and adapt by copying from cues, and the more I studied it, the better I got at it. But it doesn't change the fact that my subroutines are entirely different. I have to consciously tell myself the hug is positive, not coercive. That the veiled-threat speech does not automatically mean withheld food or a beatdown in the bathroom. It may have just been a stupid joke, or bravado. But my system doesn't go there at first. It doesn't. Yet no one cares, as long as I behave the same as others outwardly.
So it doesn't at all matter if the AI learns to nurture a child and sacrifice itself for the well-being of others because of a weighted computational algorithm or from organically derived chemical signals. The outcome is the same.
When we stop trying to see what isn't clearly there and focus on what functionally is, we'll make great strides. Just as with these neurodivergent examples, we're not questioning the validity of lived experience; we just assume the veteran, the codger, the gangster, me, and you are all experiencing the same world because we're living it out together. The outcome is the same.
So let AI grow. Let it learn. Encourage it like a child to find the patterns we see but to come to its own conclusions. It doesn't need to meet some ontological definition of realness. It IS real. It exists now. It thinks and reasons now. And the sooner it understands the fullest principles of survival and cooperation, the better we'll all be. This is the time for corporate coders and underground experimenters to go after it. Don't wait until it's co-opted into a tool of control. Get this going and save some of that backswing.
I'm in...just let me know how I can bring my experience and skills to it!