The Dangers of Generative AI Technology - Episode 303 - Show Notes


Sunday Jan 29, 2023 (00:36:00)


In the last few months, we've seen a significant improvement in generative AI. This technology allows a user to enter a prompt to create media. The output has generally been text and images, but audio and other forms of media can also be generated. Many platforms have entered this field as of late, but none has been quite as successful as ChatGPT.

The rise of ChatGPT

The platform that's become most popular is called ChatGPT. Essentially, the platform allows you to ask it questions in plain language, and it will answer in kind. You can ask it many kinds of questions and give it many kinds of commands. Similar to WolframAlpha, you can ask science, math, and engineering questions and get answers. But the generative features are the most interesting, as well as the most frightening, to people.

You can also ask ChatGPT to write content for you, and it will. For example, Avram asked it to write a how-to on building a PC. The article that was created looked good at an initial glance. The words were all in a recognizable order. The sentence structure was accurate, for the most part. However, following the instructions it gave would have damaged the processor.

During the segment, he tried again. Rather than building a PC, Avram asked the AI to write a how-to on setting up a Raspberry Pi. The result was similar. The writing seemed normal enough at first, but cracks began to emerge with further investigation. The content was vague, sometimes inaccurate or outdated, and not always complete.

For example, it gives a list of requirements, but later asks you to use something it never told you to get. It also confused SD and microSD cards. Plus, the operating system name was wrong. If you've set up a Raspberry Pi many times, you might not notice, or you'll correct the mistakes in your head. But first-timers are exactly who how-to content is for, which makes the mistakes embarrassing at best and dangerous at worst.

Generative AI in the wild

Recently it was revealed that CNET had been using an AI to create some of its content. The company was not open about its use of the technology, and really only came clean when issues were discovered. In particular, one piece included confusing information about interest, mixing up the total account balance with the interest earned. Basic interest on a $10,000 account will not be $10,300 within a year, no matter how great of a bank you have.
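The arithmetic behind that error is easy to check. A minimal sketch (the 3% annual rate here is our assumption for illustration, inferred from the $10,300 figure; it is not stated in the episode):

```python
def simple_interest(principal: float, rate: float, years: float = 1.0) -> float:
    """Interest EARNED over the period -- not the ending balance."""
    return principal * rate * years

principal = 10_000.00
rate = 0.03  # assumed 3% annual rate, for illustration only

interest = simple_interest(principal, rate)  # what you earn: $300.00
balance = principal + interest               # what you end with: $10,300.00

print(f"Interest earned: ${interest:,.2f}")
print(f"Ending balance:  ${balance:,.2f}")
```

The AI's mistake was reporting the ending balance ($10,300) as if it were the interest earned ($300), which is off by a factor of more than thirty.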

In addition to content errors, it was also discovered that the CNET AI had been plagiarizing content. In fact, the output read like the work of a middle schooler: taking a sentence written by someone else, changing just a word or two, and passing it off as original. The publication Futurism has been tracking the issues, which appear to be extensive.

Despite the backlash from the industry, and the discontinuation by CNET, other publications have said that they are also planning to implement AI writers. BuzzFeed, for example, is planning to use the technology. The good news there is that no one expects BuzzFeed to have accurate information, so they're pretty safe.

The near future

Obviously, the technology is not ready for primetime. But, more importantly, it's never going to be good. AI cannot have human experiences. AI cannot interview someone. AI cannot get a scoop. AI cannot break news. All of these things require human intervention. AI can only build upon the existing work of people.


Scott Ertz


Scott is a developer who has worked on projects of varying sizes, including all of the PLUGHITZ Corporation properties. He is also known in the gaming world for his time supporting the rhythm game community, through DDRLover and hosting tournaments throughout the Tampa Bay Area. Currently, when he is not working on software projects or hosting F5 Live: Refreshing Technology, Scott can often be found returning to his high school days working with the Foundation for Inspiration and Recognition of Science and Technology (FIRST), mentoring teams and helping with ROBOTICON Tampa Bay. He has also helped found a student software learning group, the ASCII Warriors, currently housed at AMRoC Fab Lab.

Avram Piltch


Avram's been in love with PCs since he played the original Castle Wolfenstein on an Apple II+. Before joining Tom's Hardware, he spent 10 years as Online Editorial Director for sister sites Tom's Guide and Laptop Mag, where he programmed the CMS and many of the benchmarks. When he's not editing, writing, or stumbling around trade show halls, you'll find him building Arduino robots with his son and watching every single superhero show on the CW.
