A.I. Wayo Wayo 🤖: How E Take Easy To Fool Dem A.I.-Detection Tools❓
⬇️ Pidgin ⬇️ Black American Slang ⬇️ English ⬇️
Times don reach wen you no fit sabi wetin na real again for dis our world 🌍. E don dey hard to know whether na Pope really wear Balenciaga 👔, whether we really go moon 🌑. Ontop sey computer 💻 dey form pishure wey dey resemble real thing, kasala don burst and e dey cause gbege for our society. We sef no know which one na real 📸 and which one na fake again.
Plenty companies don show face say dem go help us know which pishure na real and which one na fake. Dem get beta technology 🛠 wey dey look pishure well well, check am with computer to know which one human being snap and which one na computer form am. But some people wey sabi computer and tech matter dey fear say this A.I. go always pass the tools wey we dey use check am. 🧐
To know whether these A.I.-detection tools strong reach, New York Times 🗞 con test five new services with over 100 pishure, some na A.I. form am, some na real pishure. Dem see say these services dey better small small, but e still get where e no pure.
Imagine pishure wey be like say na Elon Musk, dat billionaire, dey hug robot 🤖. One artist wey dey use A.I. na im form am. E fine well well but e con confuse plenty A.I. pishure checkers. Dem ones wey dey collect money 💰 like Sensity, and the ones wey dey free like Umm-maybe’s A.I. Art Detector, all of dem no fit catch say na A.I. form am. 😵💫
Dem detectors dey only look for pattern wey no pure, sharpness and contrast wey dey one kain. But dem no sabi context, so dem no fit understand say Elon Musk no go just dey inside foto dey hug robot like dat. Na im be the wahala wen na only technology 💾 we dey depend on to catch fake pishure.
Many of these companies like Sensity, Hive and Inholo, wey get Illuminarty, no even gree say dem tool dey perfect. Dem say na so dem dey try make their systems better as A.I.-image formation dey improve. Hive con talk sey sometimes na because the images no too clear 🌫 na im make dem no fit catch am. Umm-maybe and Optic, wey get A.I. or Not, no even gree answer dem when dem call dem.
To do this test, New York Times 📰 collect A.I. images from artists and researchers wey sabi how to use generative tools like Midjourney, Stable Diffusion and DALL-E. Dem ones fit create realistic pishure of people, animals 🐅, nature 🌿, house 🏠, food 🍲 and plenty plenty other things. The real pishures na from The Times dem take am.
Plenty people dey see A.I. detection as way to protect us from bad A.I. pishure. But experts like Chenhao Tan, wey be assistant professor of computer science for University of Chicago, dey shake head. E say these tools no too good and e no believe say dem go fit make am better.
Chenhao talk say, “For now, e fit possible say dem go fit work small, but for future, anything wey human being fit do with images, A.I. go fit recreate am and e go hard well well to know the difference.” 🔄
NOW IN BLACK AMERICAN SLANG
A.I. Playin’ Tricks 🤖: How Easy Is It To Fool Them A.I.-Detection Tools❓
We in this crazy world 🌍 where you ain’t sure no more ’bout what’s real. It’s like, did the Pope rock Balenciaga? Did we really walk on the moon? 🌑 Seems like we got these mad lifelike pics all over the net, all thanks to A.I., messing with our heads, man. Got folks trippin’ ’bout what’s real and what’s fake. 💭
Now, there’s a whole bunch of companies poppin’ up, sayin’ they can tell you what’s legit and what’s not. They got these high-tech tools 🛠, algorithms and all, checking out these pics, trying to figure out which ones are computer-generated and which ones are legit. But yo, some tech heads and misinformation experts are buggin’ ’cause they think A.I. is always gonna be one step ahead. 🧐
New York Times 🗞, they went all out, put five of these A.I.-detection services to the test with over a hundred different pics, some real, some A.I. Turns out, these services are stepping up their game, but still got a long way to go.
Think ’bout this – a pic of Elon Musk, that billionaire dude, looking like he’s all buddy-buddy with a robot 🤖. This was done by some A.I. artist and even though it looks crazy, it managed to fool several A.I. detectors. Yeah, even those paid ones like Sensity and the freebies like Umm-maybe’s A.I. Art Detector. They couldn’t spot that it was A.I. work, man. 😵💫
These detectors, they just look for strange patterns, weird sharpness, contrast, stuff like that. But they totally miss out on context, so they don’t get that Musk chilling with a robot ain’t likely. That’s the problem when you’re too reliant on tech to spot the fakes.
But check this, companies like Sensity, Hive, and Inholo, the ones behind Illuminarty, they didn’t dispute the test results. They’re saying their systems are always leveling up, keeping pace with A.I. advancements. Hive even said their misclassification might be due to analyzing low-quality images 🌫. But Umm-maybe and Optic, the company behind A.I. or Not, they didn’t say a word.
For the test, New York Times 📰 got A.I. images from artists and researchers familiar with generative tools like Midjourney, Stable Diffusion, and DALL-E. These tools can create lifelike pics of people, animals 🐅, nature 🌿, buildings 🏠, food 🍲, and all kinds of stuff. The real images were straight from The Times’ archive.
A.I. detection is seen as a way to protect against harmful A.I. pics. But A.I. experts like Chenhao Tan, who’s an assistant professor of computer science at the University of Chicago, ain’t so sure.
He says, “I don’t think they’re great, and I’m not optimistic that they will be. In the short term, they might be somewhat accurate, but in the long run, anything special a human does with images, A.I. will be able to recreate as well, and it will be very difficult to distinguish the difference.” 🔄
And there’s our A.I. dilemma, y’all. Watch out!
NOW IN ENGLISH
A.I. Playing Games 🤖: How Easy Is It to Fool A.I.-Detection Tools❓
Listen up, here’s the deal. We’re living in a bewildering world 🌍 where certainty about what’s real has become elusive. Did the Pope really wear Balenciaga? Did we actually land on the moon? 🌑 Hyper-realistic pictures are flooding the internet, all thanks to A.I., and they are causing real confusion. People are left questioning what’s genuine and what’s not. 💭
As a result, numerous companies are emerging, claiming they can differentiate between what’s authentic and what’s artificial. They have these sophisticated tools 🛠, algorithms, and more, analyzing these pictures, aiming to discern which ones are computer-generated and which ones are genuine. However, some technology enthusiasts and misinformation specialists are apprehensive because they believe A.I. is always going to be a step ahead. 🧐
The New York Times 🗞 conducted an exhaustive analysis by putting five of these A.I.-detection services to the test with more than a hundred different pictures, some real, some A.I.-generated. The results indicated that these services are indeed improving, but they still have a long journey ahead.
Consider this – a picture of Elon Musk, the billionaire, appearing as if he’s friends with a robot 🤖. This was created by an A.I. artist, and even though the scene is plainly implausible, the image was realistic enough to deceive several A.I. detectors. Yes, including premium ones like Sensity and even free ones like Umm-maybe’s A.I. Art Detector. They were unable to identify that it was the work of A.I. 😵💫
These detectors merely look for anomalies in patterns, odd sharpness, contrast, and such. But they completely overlook the context, hence they fail to realize that Musk socializing with a robot is unlikely. This becomes a problem when we rely too heavily on technology to identify the fakes.
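The low-level statistics described above can be illustrated with a toy sketch. This is not any vendor’s actual method, just a hedged illustration: a simple Laplacian filter measures high-frequency texture, one of the sharpness-style signals detectors may weigh. Note that a check like this says nothing about whether the scene itself (Musk hugging a robot) is plausible.

```python
import numpy as np

def high_freq_energy(img: np.ndarray) -> float:
    """Mean squared response of a discrete Laplacian filter --
    a crude proxy for the sharpness/texture statistics that
    artifact-based detectors examine."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(np.mean(lap ** 2))

rng = np.random.default_rng(0)
noisy = rng.random((64, 64))   # stand-in for a camera photo (sensor noise, fine texture)
smooth = np.zeros((64, 64))
smooth[16:48, 16:48] = 0.5     # stand-in for an overly smooth synthetic render

# The "photo" carries far more high-frequency energy than the flat render.
print(high_freq_energy(noisy) > high_freq_energy(smooth))  # prints True
```

Both test images here are synthetic placeholders; the point is only that such a detector compares texture statistics, so a generator that reproduces realistic noise and sharpness can slip past it regardless of how implausible the depicted scene is.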
Interestingly, companies like Sensity, Hive, and Inholo, those behind Illuminarty, did not contest the test results. They claim that their systems are continually evolving, matching pace with A.I. advancements. Hive even suggested that their misclassification might be due to the analysis of low-quality images 🌫. But Umm-maybe and Optic, the company behind A.I. or Not, chose to remain silent.
For the test, The New York Times 📰 procured A.I. images from artists and researchers proficient with generative tools like Midjourney, Stable Diffusion, and DALL-E. These tools are capable of creating lifelike pictures of people, animals 🐅, nature 🌿, buildings 🏠, food 🍲, and many other things. The real images were obtained directly from The Times’ archives.
A.I. detection is perceived as a protective measure against harmful A.I. pictures. But A.I. experts like Chenhao Tan, an assistant professor of computer science at the University of Chicago, aren’t so confident.
He expressed, “I don’t think they’re great, and I’m not optimistic that they will be. In the short term, they might be somewhat accurate, but in the long run, anything special a human does with images, A.I. will be able to recreate as well, and it will be very difficult to distinguish the difference.” 🔄
That’s the A.I. dilemma we’re facing. Beware!