Machines might know how to stop machines from turning us into machines. I asked ChatGPT a few questions aimed at finding ideas for reducing propaganda. The results aren’t particularly advanced, not in the way we hope sophisticated new technology will also be smart and wise, but the quick organization of the issues is far better than an ordinary search.
Text piles up fast in these sessions, too many words to post in full, but here’s a taste.
I wrote:
“Measurably improving media literacy and critical thinking skills is a complex and expensive challenge that policymakers address over generations. Meanwhile, societies are currently pummeled by massive content distribution designed to spread disinformation and disrupt social cohesion, so the time available to effectively remove intentionally harmful influence is comparatively short and requires thoughtful priorities. What’s currently implemented or under review?”
ChatGPT replied:
You are correct that improving media literacy and critical thinking skills is a complex and long-term challenge that requires sustained investment and effort. However, there are some actions that can be taken in the short term to address the immediate problem of harmful content.
One approach is to increase the transparency and accountability of content creators and distributors. This can be done through policies and regulations that require clear labeling of sponsored content, disclosure of conflicts of interest, and penalties for deliberate dissemination of false information. Social media platforms and search engines can also be incentivized to promote high-quality and accurate content, while downgrading or removing low-quality or misleading content.
Another approach is to increase public awareness and education around the issue of disinformation and propaganda. This can be done through public campaigns and media outreach that explain the nature and consequences of disinformation, as well as provide tools and resources for individuals to identify and combat false information. This includes providing guidance on how to fact-check information, evaluate sources, and identify bias and propaganda techniques.
Finally, there is a growing focus on building collaborative networks and partnerships between government, civil society, and the private sector to address the problem of disinformation. This includes initiatives to share information, coordinate responses, and develop joint strategies for countering harmful content.
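One of those suggestions, downranking misleading content, is concrete enough to sketch. Here is a minimal Python illustration, assuming a platform already attaches some credibility score to each post; the Post fields, weights, and numbers are hypothetical examples, not any platform’s real ranking signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float   # baseline signal such as clicks or shares
    credibility: float  # 0.0 (flagged as misleading) .. 1.0 (well sourced)

def rank_score(post: Post) -> float:
    """Weight raw engagement by credibility so flagged posts sink in the feed."""
    return post.engagement * post.credibility

posts = [
    Post("Viral rumor", engagement=9000.0, credibility=0.1),
    Post("Fact-checked report", engagement=1200.0, credibility=0.95),
]

# The rumor's huge engagement no longer guarantees it the top slot.
for p in sorted(posts, key=rank_score, reverse=True):
    print(f"{rank_score(p):8.1f}  {p.title}")
```

The hard part, of course, is producing the credibility score in the first place, which is exactly the fact-checking and labeling work the other answers describe.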
Fighting today’s barrage of ballot-bending psychological content requires strong, immediate action. I had tried to bracket a short time frame during this session, hoping for approaches that could be applied right away, but this and several previous answers repeated methods that require decades and huge budgets. ChatGPT and other robot playgrounds may ignore as much as they deliver, whatever is uncommon or backed by little data, but I’m wildly guessing because I have no idea what’s inside these engines.
As Dave Troy says, “If most of the internet is nonsense, the output of most AI will be nonsense.”
I asked ChatGPT about quickly reducing propaganda and quashing psychometric weapons. Ideas appeared in seconds:
—clear labeling
—conflict disclosure
—accountable content
—penalties for deliberate lies
[partial session screen capture]