OnlyHumans

A bouncer for the internet. Organic beings only.

Other possible names: Organisms, MostlyHuman, Humanity, Authenticity.io, RealityGuard, Verified Nexus, Organic Matters

https://www.404media.co/ai-is-poisoning-reddit-to-promote-products-and-game-google-with-parasite-seo/

Reddit, Facebook, Threads, and soon every other corner of the internet will be inundated with AI-generated content, indistinguishable from human content, pushing the agenda of the highest bidder. The integrity of the internet and its exchange of ideas will soon be compromised.

How might one go about designating a section of the internet to be human-generated content only?

Captcha systems will fail, as AI will soon outperform humans at them. Even a video chat will be fully spoofable within a matter of years (or months?).

How will we insulate a portion of the internet from AI marketing / astroturfing? Is there any way to ensure that who you’re interacting with is real?

Here are some ideas:

A referral system

Imagine a large tree of referrals.

The root user invites verified humans. Those users invite other humans.

A tree forms.

If a user is found to have referred an AI, that user’s entire downstream referrals can be pruned from the group.

Some moderation may be necessary. An AI could certainly sneak in at times. But restoring the purity of the group would be more manageable.
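To make the pruning idea concrete, here is a minimal sketch of the referral tree. All names and the data structure are illustrative assumptions, not a real implementation: each member can invite others, and banning an offender removes their entire downstream subtree in one pass.

```python
from collections import defaultdict

class ReferralTree:
    """Toy model of the referral tree: members invite members,
    and an offender's whole downstream can be pruned at once."""

    def __init__(self, root: str):
        self.root = root
        self.children = defaultdict(set)  # inviter -> set of invitees
        self.members = {root}             # currently verified members

    def invite(self, inviter: str, invitee: str) -> None:
        if inviter not in self.members:
            raise ValueError(f"{inviter} is not a verified member")
        self.children[inviter].add(invitee)
        self.members.add(invitee)

    def prune(self, offender: str) -> set:
        """Remove a user and everyone they (transitively) referred."""
        removed = set()
        stack = [offender]
        while stack:
            user = stack.pop()
            if user in self.members:
                removed.add(user)
                self.members.discard(user)
                stack.extend(self.children.pop(user, set()))
        return removed
```

So if "alice" vouched for a bot that vouched for more bots, `tree.prune("alice")` clears out the whole branch, which is what makes recovery "more manageable" than hunting accounts one by one.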

This could be combined with in-person verification events where moderators give out credentials.

A web-of-trust could be established where humans vouch for each other’s identities, forming a decentralized reputation network. Users who are highly trusted by many others earn the ability to access human-only spaces. Bad actors forfeit reputation and lose access.
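The web-of-trust idea could work something like the sketch below. The seed set, the vouch threshold, and the fixed-point loop are all assumptions made for illustration: trust starts from a few verified seeds and spreads to anyone vouched for by enough already-trusted members.

```python
def trusted_members(vouches: dict, seeds: set, threshold: int = 2) -> set:
    """Grant trust to anyone vouched for by at least `threshold` trusted users.

    `vouches` maps each user to the set of users who vouched for them.
    Iterates until no new users can be trusted (a fixed point).
    """
    trusted = set(seeds)
    changed = True
    while changed:
        changed = False
        for user, vouchers in vouches.items():
            if user not in trusted and len(vouchers & trusted) >= threshold:
                trusted.add(user)
                changed = True
    return trusted
```

A bad actor vouched for only by other untrusted accounts never crosses the threshold, while losing vouches (reputation) drops a user back out of the trusted set on the next recomputation.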

Biometric verification using hardware

Some kind of device, a black box encrypted to prevent spoofing, would require unique biometric information to verify a human's identity.

DNA would be ideal. Possibly a face scan or a set of fingerprints. Temperature, sweat, blood pressure, and heart rate could possibly be used in tandem.

Still open to pitfalls, but I would assume spoofing a legitimate DNA strand IRL is a little harder than selecting which boxes contain traffic lights.

Government-issued identification documents

Self-explanatory. Not ideal to lean on the government for reasons of fascism and privacy.

Adjacent, and not without its own pitfalls: sites like Ancestry.com could be partnered with to verify that you are a human, without duplicates or spoofing.

Proof-of-personhood protocols

Borrowing concepts from blockchain, protocols could be developed that require users to complete extended tasks that are easy for humans but difficult for AI: playing simple games, identifying objects in images, or having brief video chats. By regularly requiring this "proof-of-personhood", AI could be filtered out.

Unfortunately, this falls short in the same way captchas do currently. There aren’t many online tasks that humans will be able to do better than AI in the long run.

Niche-ing

The problem may solve itself in the form of niche communities. Cost-effective AI marketing will seek the largest audience. Increasingly niched communities gated by shared interest in a given blogger, podcast, band, etc. may fly under the radar of AI, allowing users to find relief from AI-generated content.

This isn’t even remotely foolproof, as teaching AI to traverse niche corners of the internet is a fairly trivial task.

Other considerations

  • Privacy: requiring all of this verification also risks stripping the user of privacy. How do we verify someone is human without revealing which human they are?
  • Switcheroo: These methods are mostly designed to mitigate non-human sign-ups, but would not necessarily prevent a human from signing up and then allowing an AI to use their account. This at least limits the scale of AI-generated content, but doesn’t eliminate it entirely.
  • User adoption: Much like privacy solutions today, it may be difficult to communicate the need to the general population. Costly marketing and incentives for participation may be needed to gain traction.
  • Bubble demographic: This is subject to the risk of creating an insular, elitist community disconnected from the broader internet. Much like how parts of Mastodon’s user base and content are geared towards the highly technical, excluding the layman.
  • Scalability: Considering how involved some of these verification methods are, large scale user adoption could end up being infeasible. Especially if human moderation is largely required. Rather ironically, a solution may be to train an AI model to administer these verification methods at scale.
  • Decentralization: Like blockchain technology, relying on peer-to-peer verification instead of centralized verification may help with scalability, privacy, and fascism-related issues (not trying to make the next Mark of the Beast™). But this allows the potential for the system to be taken over by a bad actor or AI supermajority.
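On the privacy point above, one way to verify someone is human without revealing which human is an anonymous one-time token. The sketch below is a deliberately simplified assumption (a real system would want blind signatures so even timing can't link a token to a person): the issuer verifies humanity out-of-band, hands out a random token, and stores only the token itself, never who received it.

```python
import secrets

class AnonymousTokenIssuer:
    """Toy sketch: prove 'a verified human' without proving 'which human'.

    The issuer records only outstanding tokens, not identities, so
    redeeming a token reveals humanness but nothing else.
    """

    def __init__(self):
        self.valid_tokens = set()  # issued but not yet redeemed

    def issue(self) -> str:
        # Called only after an out-of-band humanity check; identity not stored.
        token = secrets.token_hex(16)
        self.valid_tokens.add(token)
        return token

    def redeem(self, token: str) -> bool:
        # One-time use limits a single human handing tokens to many bots.
        if token in self.valid_tokens:
            self.valid_tokens.remove(token)
            return True
        return False
```

A platform could accept these tokens at sign-up: it learns the account belongs to a verified human, while the issuer never learns where the token was spent.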

Conclusion... for now

A combination of several methods is the most sound for effectively filtering out non-human actors.

This could power its own social media site, like a Reddit or Facebook.

Or it could be offered as a service to other platforms, much like Sign in with Apple. Those sites could then display something like Twitter’s authentication checkmark, but for identifying verified humans.

Its path to profit is not entirely straightforward. But its need is obvious and immediate, as AI capabilities barrel forward by the day.


Date
May 25, 2024