As the globalist elite's push toward the Age of Transhumanism intensifies, our discussion sheds light on the evolving relationship between AI and humanity.
Fear-mongering is the biggest part of the whole psyop. If you mix up real and fake fears, most people will not know the difference:
Fear viruses;
Fear enforcement measures;
Fear other people;
Fear vaccines;
Fear directed energy weapons;
Fear aliens;
Fear 'AI', etc., etc.
- some are real, some aren't.
And it happens on both sides - how hard would it be for the psychos to infiltrate the alternative media?
Most people seem to be too far gone to be woken up by reason - addicted as they are to the lying media.
I don't believe so-called AI is any 'smarter' than the humans that program it - a form of 'projection', to use the psychologists' term. ChatGPT has already been shown to be simply a sophisticated mirror of its controllers' ideas. Neuroscience has shown that emotions are fundamental to human intelligence - not something that 'AI' is known for.
The psychos appear to be jealous of normal people's creative abilities - all they ever do is borrow concepts like 'transhumanism' from science fiction ('The Matrix', 'Terminator', 'Minority Report', etc.), or use '1984' as some sort of instruction manual. Artificial Imitation might be a more accurate term.
Maybe there's a correlation between lack of empathy and lack of originality?
I think you nailed it when you said, 'I don't believe so-called AI is any smarter than the humans that program it.' That is the same as saying AI is the human, or team of humans, that programmed it. Jeff and Karen go on about this as though AI becomes another lifeform if we simply agree that it is and allow it to function on its own. If that were true, then no person, company, or conglomerate of companies and participating research teams that developed the programs destroying humanity would be at fault. We can all agree that the sky is pink and orange if we want to, but it won't make it any less blue and grey.
AI development is a legal question of potentially inflicting known or foreseeable harm, not merely an ethical one of hoping we don't screw up. Legally, any harm an AI creation causes, now or in the future, is perpetrated by the developers and managers behind it.
Agreed. We've sat back and let the world be effectively run by what used to be called 'technological determinism' for far too long. I'm not against technology as such - in fact I grew up reading science fiction at the end of its so-called 'golden age'. One thing I got from that reading, however, was its strong philosophical content - many of the best writers were just as concerned with how proposed technologies or concepts would impact humanity as they were with developing the narrative. Sadly, this type of thinking is gradually being lost as we adopt much technology for its own sake, with many seeing it exclusively for its personal-enrichment possibilities.
I could go on about how the 'system' promotes psychopaths to the top of the food chain, etc., but I'll leave it there.
And that's the problem. It's no longer 'technological determinism'. It is psychopathic determinism. Right now these psychopathic personalities are hell-bent on replacing humans with AI and using humans to feed AI.
Straight out of 'The Matrix'!
Karen and Jeff, get a grip. We have a legal foundation to build on. This is eye-opening reporting, and everyone should know this stuff right now. But AI will never be a lifeform on its own simply because we say it is. It has no prime directive as an entity other than the one a person or persons program into it. What appears legally certain is that AI is the result of human activity. And if it is developed with the programming goal of replacing humans, without limit and without remuneration for losses or damages, that is a crime - right now. If or when AI programs are allowed to develop other programs beyond the point of human analysis of the learning and capability of the 'self-aware' machinery, it will slowly run out of things to replace. It will then need to replace itself. If it replaces everyone, what will it do? Who cares? We would appear to have a legal basis for stopping a lot of this now, if everyone were listening to this podcast.
It seems that whatever we do to fight back, they have a counterattack.
AI uses megawatts of power, right? Show me the circuit breaker and I'll show you an AI that's no longer a threat to humanity. My point being that AI is only a threat to humanity if we let it be. (Enter the Deep State...)