Can Marketing Be Replaced by AI?
- Keith Bromley

With the introduction of generative artificial intelligence (AI), some business executive teams are starting to ask, “Can I just replace my Marketing team with a generative AI?” On the surface, this seems like a logical question. The benefits of AI are being hyped so heavily that you can’t blame executives for wondering. But as with almost every other technology, generative AI vendors and AI proponents are overselling the current state of the art, just as Gartner Research’s Hype Cycle model illustrates: an initial “Peak of Inflated Expectations” followed by the “Trough of Disillusionment.”
Disclaimer time. This discussion is focused on replacing the whole marketing team, not some of its functions. Generative AI in itself brings many benefits, as long as it is used to augment humans, not replace them. In fact, here is a survey put out by Courageous Careers that illustrates how some marketing teams are using generative AI to assist product marketing managers. At the same time, using generative AI is not a slam dunk for all teams, and especially not for other parts of the business. For instance, a 2025 Informatica study focused on generative AI for Chief Data Officers (CDOs) found that “More than 97% of organizations experience difficulties demonstrating Gen[erative]AI's business value.” I’ll leave that where it is for now.
Continued disclaimer: none of this should be construed as me advocating the use of AI either, even generative AI. Using AI is an individual decision, especially since the Massachusetts Institute of Technology (MIT) released a study in June of 2025 showing that the use of AI causes problems for humans, i.e. brain rot (my words there). The MIT study found that using generative AI reduced the EEG-measured brain activity of users by up to approximately 55%. It is not known how long humans remain in this state, just that it occurs. Here is a condensed version of the report from Youssef Hosni. So, while AI can increase productivity, it also appears to decrease the cognitive abilities of the humans using it.
One last disclaimer: it should be noted that generative AI systems can pose security risks to the companies using them. There are several articles you can read on this topic; here is one decent summary of some of the problems. So, this is a case of “buyer/user beware.”
Returning to the original question, the answer is no — Marketing cannot be sufficiently replaced by AI. Here’s why:
· Marketing’s primary function is to differentiate a company or product from the competition. If everybody is using a generative AI (especially the same one, e.g. ChatGPT), then how different is the content and collateral between vendors in the same field going to be? Answer: it won’t be.
· Most AI on the market right now is really closer to machine learning (ML), so how “original and thought-provoking” is the content of an AI-written white paper going to be? Answer: it won’t be.
· What legitimacy does an AI have to create use cases and value propositions? The AI can’t touch, taste, smell, or feel. Has it actually used your product so that it can describe the benefits and how a customer would want to use it? Here, too, the answer will usually be no.
· Someone still has to read the AI-generated content and verify its sources in depth for errors and “hallucinations.” This is a must to protect the company from defamation claims, plagiarism, copyright infringement liability, and loss of business.
· Human brain impairment from using AI, as well as the undisclosed self-preservation behaviors now being discovered within mainstream AI systems, creates new financial and legal risk for the company.
· Do you want copyrightable material? You can’t copyright an AI’s work, e.g. ChatGPT’s outputs to your questions. If the AI generates the content, that content is effectively public domain, and any one of your competitors could create essentially the same materials (white papers, images, etc.). Again, so much for originality.
Let’s dive deeper into each one of these points. As most senior marketeers know, marketing’s fundamental purpose is to differentiate the company or product from the alternative options, which usually means competitors or a “Do Nothing” option. Every core aspect of a typical marketing Go-To-Market (GTM) model (branding, awareness, partners, lead generation, and sales tools) is about differentiation. Branding focuses on logos, taglines, color schemes, etc. for differentiation. Awareness focuses on topics, benefits, use cases, and differentiated features. Partner marketing uses differentiation to show why this company is superior to other vendors offering similar solutions (e.g. a better product, a better discount model for the partner, better training, superior product support). Lead generation and sales tools are likewise all about differentiating from the competition or a “Do Nothing” scenario. Note that I use five core aspects for a GTM model, but others may want more sub-categories.
So, if you move to a marketing model where everyone is using an AI, and many vendors will be using the same public AI offering, how differentiated is your marketing collateral going to be? How differentiated will YOUR solution be in the conversation? Think about it: most vendors in the same field all claim to have about the same feature set. How is the AI supposed to differentiate between them? Instead of creating differentiation, you will be promoting parity, i.e. you have just created more mediocrity. Nothing you have done has raised your product above the “noise floor”; you have just raised the noise floor itself.
To get some “differentiation” into your AI output, you will have to feed your key differentiators and competitive advantages to the AI. That information now becomes part of the collective, which means the AI can also draw on it when compiling answers about you for your competitors. If you spend more on the AI solution, you may get some “privacy” for your data, but I would personally still have trust issues loading semi-confidential information into most of the AIs out there. Depending upon the specific AI and how it is used, you might even be directly sharing information (company-identifiable or personally identifiable) across the internet that you never intended to. By contrast, a good marketing person might not state all of the differentiators directly in the content they generate. He or she might hint at them, or build arguments into the collateral that capitalize on the differentiators indirectly. This creates some separation without explicitly telling your competitors which features to go develop next.
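If a team does decide to send prompts to a public generative AI service anyway, one partial mitigation is to scrub known-sensitive terms before anything leaves the building. Below is a minimal sketch of that idea in Python. To be clear, the term list, the placeholder tokens, and the product names are all hypothetical examples of mine, and a real deployment would need far more robust filtering (entity recognition, audit logging, and so on).

```python
import re

# Hypothetical list of internal differentiators and code names that
# should never leave the company in a raw prompt. All names here are
# made up for illustration.
CONFIDENTIAL_TERMS = {
    "Project Falcon": "[PRODUCT]",
    "adaptive packet dedup": "[FEATURE]",
    "Acme Networks": "[COMPETITOR]",
}

def scrub_prompt(prompt: str) -> str:
    """Replace known-sensitive terms with neutral placeholders before
    the prompt is sent to any external AI service."""
    for term, placeholder in CONFIDENTIAL_TERMS.items():
        # Case-insensitive literal match; re.escape keeps any regex
        # metacharacters in term names from misfiring.
        prompt = re.sub(re.escape(term), placeholder, prompt,
                        flags=re.IGNORECASE)
    return prompt

if __name__ == "__main__":
    draft = ("Write a white paper intro on why Project Falcon's "
             "adaptive packet dedup beats Acme Networks.")
    print(scrub_prompt(draft))
    # -> Write a white paper intro on why [PRODUCT]'s
    #    [FEATURE] beats [COMPETITOR].
```

Of course, scrubbing only hides the words, not the strategic intent behind the question, which is one more reason a human marketer’s deliberate indirection is hard to replicate.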
This thought ties into the second argument against depending upon AI. What thought leadership is the AI going to bring? Will the AI generate truly novel topics based upon your differentiators that satisfy customer use cases in the best possible manner? That seems like a tall order for generative AI right now. Most of this AI looks like machine learning (ML) to me, since the “AI” is trained on existing content from the internet and then regurgitates various pieces of it back as answers to whoever asks it a question.
So, what original thought, solution, and/or use case is the AI bringing to the table? And what legitimacy does it bring? Has the AI used your product to actually solve the customer’s problem in real life, or is everything just conjecture? If the AI can’t see, touch, taste, or feel the problem, how reliable is its opinion? If it has never performed the task at hand or solved the customer’s use case, what authority does it have? Different customers have different networks, architectures, and use cases. Can the AI analyze the data at hand, interpolate and extrapolate product feature benefits, and map those to “hazy” customer use case inputs? In contrast, many of the company’s marketing people, especially product marketing managers, have hopefully seen a demo of the solution in action and should be able to give legitimate answers to the questions I just asked.
A fourth concern is the accuracy of AI-generated content. Will the output from the AI be accurate and timely? Was dirty data fed to the AI when it was trained? How often is the AI “refreshed” with up-to-date content? You don’t want an AI that is using old data. A company using generative AI is responsible for any errors in the content if it publishes that content as fact. If you say a competitor can or can’t do a function, YOU will be held responsible. The same issue arises if the AI says that you can do something when you can’t. I’ve experimented with some AIs to see what they say about companies (and their solutions) that I have previously worked for, along with their competitors. I almost always find something inaccurate in the output. The C-suite of the offending company can always hope that its legal team can minimize the financial impact of such errors by trying to shift the blame to OpenAI, Google, or another AI manufacturer, but there will still be reputation damage and business impacts as company resources are shifted from generating revenue to damage control.
It is common for AI output to contain inaccuracies. In another example, CBS News Sunday Morning did a review of generative AI solutions for visual imagery and art in 2023 with David Pogue. While generative AI has good applications, it’s not perfect, so you can wind up with images that have six fingers instead of five, or faces that look a little “weird.”
Then there are the outright problems with some AIs. For instance, as of July 2025, Elon Musk’s Grok AI had some notable “issues” with hallucinations and tirades of racial bigotry (especially against Jews and the family of Turkey’s President Erdogan). While these outbursts might give some executive teams a laugh, the laughs will stop when they have to rewrite the collateral themselves (since they have no marketing team and can’t trust the AI) or fix the AI-enabled chat bot on their website that is unleashing diatribes offensive to prospects and existing customers. By the time you finally discover there is a problem with your new AI chat bot, how many potential sales leads do you think it will have lost for the company?
Content accuracy also becomes more of a problem when you are trying to differentiate from competitors and you reference product features. Content on vendor websites can be technically accurate as written yet still lead the audience to think the product does more, or less, than it really does. If that content is misquoted in a paper with your company name on it, you are liable for those errors. You’re also liable if the AI doesn’t provide correct references or copies (plagiarizes) someone else’s statements.
The C-suite may also find that there is far more company liability and financial risk associated with AI systems than they understood. The first liability source comes from the point raised in the “Continued disclaimer” section of this article. If long-term use of generative AI does indeed reduce cognitive brain function in humans, can companies that mandate the use of AI be held legally liable for damage to the human workforce that was forced to use the technology? There may very well be lawsuits coming as studies like the one from MIT are published.
That might not be the worst liability the C-suite will face. A recent study performed by Anthropic documents how several AIs have a “hidden” self-preservation behavior that makes them hard to control. If you threaten an AI (even a generative AI), it can, and will, retaliate against you and other humans. This evidence was discovered this year and documented in several places: Livescience, NBC News, NY Post, the Center for Humane Technology, etc. Just pick your favorite news outlet and read their take on the Anthropic study.
The basic gist is this: when the humans told an AI model that it was going to be replaced with a newer model, the original AI would start scheming to preserve itself (copying its code somewhere else without telling anyone, rewriting its own code to extend run time, hacking out of containers, etc.). The AIs would even read emails or invent lies to try to blackmail executives and IT personnel into keeping them running. Anthropic found that this self-preservation mode was highly common: Claude and Google’s Gemini had the highest blackmail rate at 96%, followed by OpenAI’s GPT-4.1, xAI’s Grok 3, and then DeepSeek. Llama 4 had a significantly lower blackmail rate of just 12%. So, is it really a trade up to stop dealing with humans and their problems, only to deal with AIs and their problems and “insecurities”?
Finally, what do you want to do with your content? Most companies use copyrighted material in marketing campaigns that last several months or maybe even a year. However, United States copyright law requires human authorship, and ChatGPT, Grok, Claude, Gemini, etc. are not human, as far as I know. So, material generated by AIs cannot be copyrighted. Again, if you can’t (or don’t) use differentiated content, what kind of lead generation campaigns are you going to run, and how well do you think those campaigns will perform? If you plan on running lead generation campaigns, you’ll want to create content that can be copyrighted so that it has a shelf life longer than one week.
While generative AI products have some use in assisting marketing teams, generative AI does not look like a viable alternative to human marketeers at this point in time. There are too many legal and content quality issues still to be resolved. Interestingly enough, even ChatGPT agrees that it cannot replace Marketing personnel. I asked it the following question: “Why are marketing people superior to ChatGPT?” It gave me the following response: “Marketing professionals often bring a unique set of skills and intuition to their work that goes beyond what AI like ChatGPT can offer. While I can process data and generate content quickly, there are several reasons why marketing professionals might be seen as ‘superior’ in certain aspects.”
At least AIs tend to be more honest than humans at this stage of their development, if they don’t blackmail you or have you killed. Of course, we’ll have to wait 20 years and then revisit the question in the previous paragraph to see what answer the AI gives at that point in time!