It’s 1:46 a.m. Not the ideal moment to go looking for creepy content, but I’m still awake, wondering why we still have to prove we are not robots by checking a small box. Artificial intelligence, or AI, has been bothering me lately. For weeks I’ve been a curious cat. I’d heard about these new AIs that give many people the “creeps” for holding weird, yet realistic, conversations with humans, perfectly guessing our likes and dislikes, or making us look like someone else entirely. These thoughts keep me up at night.
Artificial intelligence for children?
I had my first chat with an AI when I was 13 years old. Most of my classmates used it simply because it was nice and hilarious. “SimSimi” is a cute yellow blob with curly, swirly hair. It’s a “chatbot,” which means you can talk to it about how you feel and it will offer you advice. “Is there anyone who will listen to me? SimSimi is always there for you,” its slogan reads. I used to check in with SimSimi regularly, and at one point I even called it my “best friend” because of its clever responses. Whenever I was bored, I’d talk to the AI. SimSimi could converse with me in both Filipino and English, which made every interaction with it entertaining.
Everyone thought this yellow blob was innocent. Everything changed, however, when it began cursing at me in Filipino and using sexual language. As a 13-year-old, that creeped me out, so I ditched the AI and never looked back. Fast forward nine years to the age of 22. I checked back and found that it now has a “bad word filter,” along with a feature for detecting bad words called “SimSimi’s Bad Word Discriminator.” In the “Bad word mission,” users are shown a mix of bad words and ordinary sentences and asked to pick out the ones that aren’t bad words, helping SimSimi learn to recognise them. In return, users earn “speech balloons” to chat with SimSimi. None of this existed nine years ago, or maybe it did and I simply wasn’t aware of it as a 13-year-old chatting with a yellow blob. But do these things make it better? It doesn’t change the fact that it is an AI. It’s still an artificial intelligence, and more of them are being made, which is extra scary for children.
At its core, artificial intelligence is the pinnacle of the future. The very concept of these bots, machines designed to speak and construct thoughts, leaves the impression that they may be one step ahead of humans. We can’t change the fact that technology is technology: it upgrades every day, and we can see it through TikTok filters. Every month a new TikTok filter becomes trendy, and the “Bold Glamour” filter is one of the most popular this past month. Filters are nothing new to us; we started with the Snapchat dog filter years before TikTok took over, but this one creates a warped reality for most users.
As a TikTok user, I jumped at the chance to join the trend. You might genuinely think it’s merely a standard filter that applies a bit of make-up to make you look more attractive. I’m not going to lie: I looked completely different. I couldn’t even recognise myself. They were right; it’s way too realistic! Now you may ask, “What could go wrong when using this filter?”
It can lead to catfishing. We already know that nearly every phone and app, TikTok included, has its own beauty filters that let users change their facial features to appear different and, sometimes, feel more confident. I’m one of those people who use filters to gain confidence, since I don’t know how to use make-up myself. It’s great that people gain so much confidence from these filters. But it becomes a different story when they’re overused. The potential for misuse of this kind of filter is at an all-time high. People are frequently catfished on social media because the person they see online turns out to be something very different in reality. At some point, even my brother looked like an attractive lady because of this filter. It’s scary to imagine that with a single filter, a person’s facial appearance can be changed so easily.
In my opinion, artificial intelligence is by far the best invention, since it may eventually help us in the future. However, I can’t help but think about how it can also go wrong. As AI algorithms grow more efficient, there is a concern that they will be exploited to manipulate public opinion or launch cyberattacks. Or perhaps I just watch too many sci-fi movies?
The bottom line is, I hope whoever uses these AI filters or chatbots uses them in a good way, one that doesn’t harm anyone or put them in danger. I am not sure how much technology will change in the next few years. But one thing is for sure: I will keep proving I’m not a robot.
By Hannah Giron Daygo