Mark Loveless, aka Simple Nomad, is a researcher and hacker. He frequently speaks at security conferences around the globe, gets quoted in the press, and has a somewhat odd perspective on security in general.

Why I Don’t Hate AI

Artwork generated by Dreamlike.art.

There have been numerous and quite lively discussions in various media outlets surrounding the use of artificial intelligence (AI), with much of the focus on what AI does not get right. The controversy - jobs will be lost, artists will be replaced, the Singularity is coming to end society - is playing quite well.

But I personally don’t hate AI. Let me explain.

It’s Not New

Dedicated, task-specific AI has been around in one form or another for decades. For my fellow Infosec professionals, the heuristic detection algorithms found in anti-virus, IDS, and IPS products are an example of AI we are already used to.

I’ve seen custom AI projects coded up in sensitive and even classified settings, looking for specific patterns that can at best be only “loosely grouped” to define a “match”. A human could do the same work, but you’d need an army of humans examining thousands and thousands of data elements, so the human approach doesn’t scale. AI can replicate the human’s process at a much faster rate.
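To make that concrete, here is a minimal sketch, in Python, of that kind of “loose grouping” heuristic. The indicator strings, thresholds, and records are all hypothetical; the point is only that fields are scored for near-matches rather than compared exactly, and that a loop like this can churn through volumes no team of humans could.

```python
# A hypothetical "loose grouping" heuristic: records are scored against
# known indicator strings, and near-matches accumulate toward a verdict.
# Indicators, thresholds, and records are made up for illustration.
from difflib import SequenceMatcher

INDICATORS = ["powershell -enc", "cmd.exe /c whoami", "rundll32 javascript:"]
MATCH_THRESHOLD = 0.7   # how similar a field must be to count as a near-match
ALERT_SCORE = 2         # how many near-matches it takes to flag a record

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_record(fields: list[str]) -> int:
    """Count (field, indicator) pairs that loosely resemble each other."""
    return sum(
        1
        for field in fields
        for indicator in INDICATORS
        if similarity(field, indicator) >= MATCH_THRESHOLD
    )

# A human could eyeball one record; this scales to thousands per second.
records = [
    ["powershell -Enc SQBFAFgA", "cmd.exe /C whoami"],  # near-matches: flagged
    ["notepad.exe readme.txt"],                         # benign: ignored
]
for rec in records:
    verdict = "ALERT" if score_record(rec) >= ALERT_SCORE else "ok"
    print(rec, "->", verdict)
```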

If you’ve used Siri, Alexa, or similar technology, you’re using AI. If you’ve used a biometric identifier on your phone or computer such as a fingerprint reader or facial recognition, you’re using AI. Auto-complete? AI. It is specific AI that performs specific tasks and specific tasks only.
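Auto-complete is a nice small-scale illustration of that “specific tasks only” point. Here is a toy sketch (the corpus and the frequency-based ranking are invented for illustration): it predicts completions for a prefix from words it has already seen, and it can do nothing else.

```python
# A toy, task-specific "AI": suggest completions for a prefix by ranking
# previously seen words by how often they occurred. The corpus is invented.
from collections import Counter

corpus = "the security team reviewed the security alerts and the logs".split()
frequency = Counter(corpus)

def complete(prefix: str, limit: int = 3) -> list[str]:
    """Return the most frequent known words starting with the prefix."""
    candidates = [w for w in frequency if w.startswith(prefix)]
    return sorted(candidates, key=lambda w: -frequency[w])[:limit]

print(complete("se"))   # ['security']
print(complete("the"))  # ['the']
```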

Many early users of these dedicated, specific AI implementations were quite critical of them (some people still are), but the implementations are arguably much better than they used to be. They are certainly widely accepted.

So What is New?

ChatGPT is getting a lot of press lately. It is what some academics refer to as an application-integrated Large Language Model (LLM) AI, and LLMs are the “change” we are now experiencing that is generating so much attention. In ChatGPT’s case, the integration into an application is via a web interface. I don’t think I need to post links to examples of ChatGPT offering up factually incorrect information or having its safety guidelines bypassed by clever prompting - there are plenty online.
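For a sense of what “application-integrated” means in practice, here is a minimal sketch of an application calling a hosted LLM over HTTP, using OpenAI’s chat completions endpoint. The model name and the bare-bones error handling are assumptions made to keep the example short; a real integration would also want rate limiting, retries, and validation of the output.

```python
# A minimal sketch of application/LLM integration over HTTP.
# Assumes an OPENAI_API_KEY environment variable; the model name is an
# assumption, and the error handling is deliberately bare-bones.
import os
import requests

def ask_llm(prompt: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(ask_llm("In one sentence, what is a large language model?"))
```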

The main difference between specific AI and an LLM is that a specific AI application knows a lot about one thing, while an LLM knows a little about a huge number of things.

True Areas of Concern

There are a few areas of concern that I have.

Control of the data. In a world where everyone wants credit for their work, credit when their work is sourced, and permission before their data is used by someone else, it is unclear who exactly is overseeing the process of controlling the data used to “source” LLMs, and what authority and motives they have in doing so. The motive part bothers me the most. We don’t have some non-profit, scientific-minded group of humble scholarly stewards of knowledge pulling in and massaging the data that drives the AI; we have large for-profit corporations and eager startups pushing for the next funding round, an IPO, or a buyout.

Expectation levels. I am not sure why the quality of AI presented in some science fiction shows is accepted as fine, but the real version we get is not. Is the main computer on the Star Trek Enterprise, which clearly has an AI interface, accepted because it appears to never make mistakes and its motives are pure (present the data being asked for), or because it lines up, expectation-wise, with Siri or Alexa just fine? What about the Star Trek character Data? Data is clearly a refined LLM of sorts - is that our expectation level for something like ChatGPT? Data never just tried to sound smart; Data was smart, in that the information presented was accurate and based upon facts. The data source for most LLMs seems to be, more or less, the Internet, and that is a questionable source. I am concerned that some levels of AI are accepted and some are not, even though all of them still need refinement.

Refinement. Building on the last point, we as consumers of Internet information have had to learn how to sift through the lies, exaggerations, and satire that exist online to get to the truth, whereas current AI hasn’t been set up to do this. Until that happens, or at least happens openly, I will have concerns about an AI implementation that holds both truth and untruth without being able to tell the difference between the two. And to me, this is critical. Of course, this raises the question of who should be instructing the AI about truths and lies. Putting deliberate disinformation campaigns aside, if the person doing the refinement believes certain things that are not in line with society in general, or (more likely) holds beliefs that only half or a third of society shares, is society going to accept the output of that AI implementation?

The Singularity. The idea that AI will become self-aware, or will more or less be able to mimic the human brain’s ability to process and analyze data, is in my opinion something that will happen. If one applies John Searle’s Chinese room thought experiment to self-aware AI, we will not be able to distinguish between AI that actually is self-aware and AI that merely convinces us it is. I personally am not afraid of the Singularity happening, but I am concerned about the societal impact - mainly the reaction, backlash, and usual turmoil from a society now having to deal with it. I could be wrong; I was wrong about the whole UFO thing, when the US Government started discussing videos of UFOs that fighter pilots took from their aircraft and stated “no, we have no idea what this is”. The world basically said “meh” and went right back to what it was doing, caring very little instead of freaking out. So maybe I shouldn’t worry - most people will probably just “meh” and go back to whatever they were doing.

Conclusion

I am looking forward to the potential benefits of AI. There will always be those who don’t accept it, but I personally feel the advantages will outweigh the disadvantages. It is still ironic that most people don’t think twice about “researching” a topic on the Internet to help themselves do something, but if a machine automates the task, we question the output. I think the main point of contention is a combination of fast change and a perceived loss of control. Once we get a handle on that, I think we can accept AI and even help refine it into its full potential.
