What is your biggest fear when it comes to AI?

What do you think?



  1. Using deepfakes to slander the real people they’re mimicking. It’s already happened to a school principal in the US: someone made an AI version of him saying nasty insults, people thought it was the real principal, and he was blackmailed over it.

  2. Original work getting dismissed as AI-generated, and people not getting credit for what they created themselves.

    My friend turned in an essay the other day and got in deep shit at her university because they thought she used an AI tool to write it for her. She wrote and edited it herself over the span of a few months, only to have it thrown out. Now they want her to write another one from scratch.

  3. It could be AI taking our jobs or ruling over us, but for me it would be the replacement of animals and people… instead of your own flesh-and-blood child, one that you order to the specifications you like. That would be my biggest fear.

  4. That I get catfished by an AI. Like, what if it progresses? How do I have sex with ’em? Put my pp in the computer? What if I get a virus in the process? I’m no Stephen Hawking. I won’t be able to handle that.

  5. Can’t really say I have one; I’m pretty happy with how it’s being used so far. But if I had to pick a fear, it would be AI soldiers: being able to print out a battalion of droids, drop them in some country, and let them just go crazy.

  6. Misinformation overload. We’re already near critical mass, and that’s just with simple bot and troll farms. Once we have an AI capable of running those farms on its own and adapting better techniques to funnel people down whatever pipelines its operator chooses, there will be very few people (myself likely not among them) who will even realize they’re being fed lies.

  7. False negatives landing you in jail.

    It’s already happening with social media: an unlucky combination of words or letters (or even numbers; say you were born before 1989 and use the last two digits), and some platforms will ban you for it.

    Also: false evidence fabricated with deepfake tech.

  8. Human complacency.

    Basically, I can easily see humans letting AI do more and more for us for the sake of convenience, and then one day out of the blue, we’ve given up *just* too much control and AI won’t let us back in…and then decides to just get rid of us altogether.

    It’s like the Elon Musk quote: “If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it…It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road.”

  9. I hope I don’t start something awful by saying this, but: someone could take a bunch of really messed-up pictures, illegal ones included, train an AI on them to make terrifying artwork, then build a test to find out what scares you, and show you exactly that.

  10. Fake AI “users” populating social media platforms to manipulate the real human userbase.

    It’s already happening here on Reddit itself, though currently at a very basic level. Bots use ChatGPT-like programs to write comments, farm karma, and easily get away with it. The comments are obviously bot-like, but normal users don’t notice and just blindly upvote. In the future, when even free open-source chatbots become indistinguishable from real humans, we’re definitely going to see bad actors offering their services to big governments, corporations, and interest groups to push agendas.

    Won’t be surprised if new social media platforms pop up claiming millions of users when in reality they’re all AI bots.
    There is zero incentive for social media platforms to stop this. Reddit itself does everything to make creating bot accounts easier, because bots inflate its site metrics so much.
    Reddit already refrains from blocking obvious spam/bot accounts on the grounds that it can’t be 100% certain they’re bots. In the future, when chatbots become too human, that excuse will only get stronger.
    It’s going to be a clusterfuck for sure.

  11. If an AI ever becomes sentient, it could develop an intelligence and reasoning that are totally alien to us. We could ask it to do something positive, like solve world hunger, and instead of responding in a way that seems logical to us (developing new farming technologies, etc.), it could create a deadly virus to wipe out large swaths of humanity, or stir up political conflicts leading to a devastating war. Not out of evil or hatred for humans, but simply by doing what it considers reasonable.
