From Automation to Innovation

How Generative AI is Revolutionizing Business Processes

Sebastian Holte and Tim Bublies

Beyond the Buzz: What Generative AI Really Means for Marketing Communications

Generative AI (GAI) has captured the attention of marketing teams worldwide – for good reason. Tools such as ChatGPT, Midjourney, and Sora promise unprecedented speed, flexibility, and creative power. From generating ad copy to producing entire campaigns in minutes, GAI is transforming the way brands speak to consumers. But behind the surface of text generation and image creation lies a far more complex and strategic transformation. What criteria should companies consider when selecting the right models? What happens to team roles when entry-level creative tasks are automated? And how do customers really feel when they know an algorithm is behind the message?

A recent review of 45 academic articles conducted at the University of Mannheim sheds light on the deeper layers of this shift. The findings reveal not only novel opportunities but also subtle tensions and unresolved challenges that marketers can no longer ignore. The following insights go beyond the hype and offer a roadmap for how GAI is changing marketing communications in ways that are both promising and provocative.

From Foundation Models to Strategic Choice: Why Model Selection Matters

Not all GAI tools are created equal, and the choice among general-purpose, domain-specific, and hybrid models is far from trivial. General-purpose models such as GPT-4 are ideal for early-stage ideation, offering breadth, ease of use, and low setup costs. But when it comes to tailoring content for specific industries or aligning outputs with brand voice, these models often fall short. Domain-specific models such as BloombergGPT, or tools trained on proprietary data, deliver more precise, reliable, and brand-consistent content. Hybrid approaches, which combine general capabilities with fine-tuning on internal data, are increasingly regarded as the gold standard.
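To make the hybrid idea concrete, the minimal Python sketch below shows one lightweight variant: a general-purpose model is steered with proprietary brand-voice material at prompt time rather than through full fine-tuning. The `call_model` helper, the guidelines file name, and all parameter names are illustrative assumptions, not a reference to any specific vendor API.

```python
# Minimal sketch of a "hybrid" setup: a general-purpose model is steered with
# proprietary brand material at prompt time instead of (or before) fine-tuning.
# `call_model` is a hypothetical placeholder for whichever provider API is used;
# the guidelines file name is an assumption.

from pathlib import Path

def load_brand_voice(path: str = "brand_voice_guidelines.txt") -> str:
    """Read internal, proprietary brand-voice guidelines."""
    return Path(path).read_text(encoding="utf-8")

def draft_copy(brief: str, brand_voice: str, call_model) -> str:
    """Draft ad copy for a creative brief, constrained by the brand voice."""
    system_prompt = (
        "You are a copywriter for our brand. "
        "Follow these voice guidelines strictly:\n" + brand_voice
    )
    # The general-purpose model contributes breadth and fluency; the injected
    # guidelines add the brand-specific precision that off-the-shelf use lacks.
    return call_model(system=system_prompt, user=brief)
```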

The strategic takeaway? Model choice is not just an IT question; it determines brand distinctiveness, data governance, and the long-term adaptability of marketing teams. Organizations focused on differentiation and brand identity must invest beyond off-the-shelf solutions, or they risk commoditizing their voice.

Reshaping the Team: Modular Workflows, Fewer Silos, New Skills

One of the most transformative effects of GAI lies within the organization. By automating labor-intensive tasks such as image sourcing, video production, and copywriting, GAI significantly reduces the need for specialized roles in design and content execution. This shift enables more modular and iterative workflows: campaigns that previously required weeks or even months of coordination between copywriters, designers, and editors can now be drafted and refined in days. Tools such as Synthesia and Udio enable individual employees to produce studio-grade video and audio content without extensive technical expertise.

While this flexibility expands creative possibilities, it also changes the nature of marketing work. Entry-level positions in the creative field, which have long been considered training grounds for future strategic thinkers, may disappear. Without these stepping stones, companies risk weakening their internal talent pipeline. To stay ahead, marketing leaders must rethink capability building. Strategic prompting, ethical review, and cross-functional literacy will matter more than ever before. The creative team of the future may not be larger, but it will need sharper capabilities, faster execution, and a strong command of AI tools and processes.

Hyper-Personalization or Data Overreach? The Consumer Trust Dilemma

GAI enables marketers to go far beyond traditional segmentation. Today, content can be tailored not just to age or gender, but to political beliefs, moral values, or personality traits in real time – all inferred from digital consumer behavior. Add contextual signals such as location, environment, or weather, and you get what is called “hyper-personalization” or “segments of one”.
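As a rough illustration of how such a “segment of one” can be assembled, the sketch below combines inferred traits and contextual signals into a single generation request. The field names, signals, and example values are assumptions chosen for illustration, not a schema drawn from the review.

```python
from dataclasses import dataclass

# Illustrative only: the fields below are assumptions, not a reference schema;
# real systems infer such traits from digital consumer behavior.
@dataclass
class ConsumerContext:
    personality: str        # e.g. "analytical" or "adventurous"
    values: list[str]       # inferred moral or political leanings
    location: str           # contextual signal
    weather: str            # contextual signal

def personalization_prompt(product: str, ctx: ConsumerContext) -> str:
    # Each added signal narrows the segment until it describes one consumer.
    return (
        f"Write a short ad for {product}. "
        f"Appeal to a {ctx.personality} reader who cares about "
        f"{', '.join(ctx.values)}. They are in {ctx.location}, "
        f"where the weather is {ctx.weather}."
    )

# Example: a "segment of one" request
ctx = ConsumerContext("adventurous", ["sustainability"], "Oslo", "snowy")
print(personalization_prompt("a hybrid SUV", ctx))
```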

Yet with this power comes a dilemma. On the one hand, consumers appreciate relevance: personalized messages can increase click-through rates, deepen emotional engagement, and enhance the customer experience. On the other hand, consumers react negatively when they feel surveilled or manipulated. Recent studies reveal this tension clearly: AI-generated messages are often perceived as less authentic, particularly in emotionally charged or hedonic settings where values, identity, or empathy play a central role. Disclosing that content is AI-generated does not automatically help, either. While some consumers value transparency, others read it as a warning sign or “red flag”, triggering discomfort and distrust.

The implication is clear: brands must carefully balance the promise of personalization with growing concerns about privacy. Personalization should feel helpful, not intrusive. And AI-generated content must be evaluated not only for performance, but more importantly for perception.

Synthetic Influence: Where Immersion Meets Manipulation

Virtual influencers, deepfake ads, and synthetic brand personas are no longer science fiction. Major brands such as Dior, Levi’s, and Coca-Cola already use AI-generated characters to promote products and interact with audiences. The realism of these avatars, which are sometimes indistinguishable from real people, opens new creative opportunities. Yet the emotional reaction is mixed: consumers are fascinated by the immersive experience but also unsettled by its artificiality. Realism without human warmth can lead to eeriness in the form of identity discomfort, moral ambiguity, and the sense of being deceived. This can undermine trust, especially when the synthetic figure adopts marginalized identities without authentic representation.

As the use of synthetic advertising grows, so do the stakes. Marketers must assess not only how visually appealing content is, but also whether it aligns with their brand values, respects identity boundaries, and preserves emotional credibility.

Prompt Engineering and the Missing Playbook

Despite the rise of GAI tools, the academic community and the business world still lack clear frameworks and guidelines for responsible implementation. While models continue to evolve, best practices for prompt design, bias mitigation, and content governance remain scarce. Effective “prompt engineering” (the art of instructing GAI to deliver useful and brand-aligned outputs) is still largely a trial-and-error task. Few marketers have formal training in this area, and only a handful of academic studies provide structured guidance.

The same holds true for ethical oversight. Companies such as Coca-Cola have started to institutionalize GAI oversight by appointing managers to balance experimentation and control. However, the broader field lacks agreed standards on hallucination control, misinformation risk, or inclusive representation. This gap poses a critical challenge: without internal guidelines, informal practices may prevail and lead to reputational risks, regulatory pitfalls, or missed strategic opportunities. Building internal literacy, documentation standards, and feedback systems is therefore not optional; it is the foundation for sustainable AI use in marketing communications.
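What such an internal playbook could start to look like is sketched below: a versioned prompt template plus a lightweight human review gate covering hallucination control and inclusive representation. The template fields, checklist items, and function names are illustrative assumptions, not an established standard.

```python
# A documented, versioned prompt template plus a simple review gate, as one
# possible building block for an internal GAI playbook. All names, fields,
# and checklist items are illustrative assumptions.

PROMPT_TEMPLATE_V1 = """\
Role: {role}
Task: {task}
Brand constraints: {constraints}
Output format: {output_format}
"""

REVIEW_CHECKLIST = [
    "Factual claims verified against a trusted source?",        # hallucination control
    "No sensitive or inferred personal attributes used without consent?",
    "Representation reviewed for stereotyping or exclusion?",   # inclusive representation
]

def build_prompt(role: str, task: str, constraints: str, output_format: str) -> str:
    """Fill the documented template so prompts are repeatable, not ad hoc."""
    return PROMPT_TEMPLATE_V1.format(
        role=role, task=task, constraints=constraints, output_format=output_format
    )

def ready_to_publish(review_answers: dict[str, bool]) -> bool:
    """Content ships only when every checklist item is explicitly confirmed."""
    return all(review_answers.get(item, False) for item in REVIEW_CHECKLIST)
```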

The Way Forward: Beyond Tools Toward Transformation

GAI is not just a new set of tools, but a new logic for how marketing operates. It breaks down silos, redistributes creative work, and requires new forms of strategy, ethics, and talent development. For companies willing to look beyond the novelty to the underlying structure, the opportunities are immense. But they materialize only for those that act deliberately: choosing the right models, redesigning workflows, respecting consumer boundaries, and investing in internal AI literacy. The brands that succeed in this new environment will not just be good at generating content. They will be the ones that master the balance between automation and authenticity – and turn GAI not into a shortcut, but into a competitive advantage.

References
  • Brüns, Jasper David and Martin Meißner (2024), “Do you create your content yourself? Using generative artificial intelligence for social media content creation diminishes perceived brand authenticity,” Journal of Retailing and Consumer Services, 79, 103790.
  • Campbell, Colin, Kirk Plangger, Sean Sands, and Jan Kietzmann (2022), “Preparing for an Era of Deepfakes and AI-Generated Ads: A Framework for Understanding Responses to Manipulated Advertising,” Journal of Advertising, 51 (1), 22–38.
  • Cillo, Paola and Gaia Rubera (2024), “Generative AI in innovation and marketing processes: A roadmap of research opportunities,” Journal of the Academy of Marketing Science.
  • Cui, Yuanyuan (Gina), Patrick van Esch, and Steven Phelan (2024), “How to build a competitive advantage for your brand using generative AI,” Business Horizons, 67 (5), 583–94.
  • Davenport, Thomas, Abhijit Guha, Dhruv Grewal, and Timna Bressgott (2020), “How artificial intelligence will change the future of marketing,” Journal of the Academy of Marketing Science, 48 (1), 24–42.
  • Feuerriegel, Stefan, Jochen Hartmann, Christian Janiesch, and Patrick Zschech (2024), “Generative AI,” Business & Information Systems Engineering, 66 (1), 111–26.
  • Grewal, Dhruv, Cinthia B. Satornino, Thomas Davenport, and Abhijit Guha (2024), “How generative AI is shaping the future of marketing,” Journal of the Academy of Marketing Science.
  • Hartmann, Jochen, Yannick Exner, and Samuel Domdey (2024), “The power of generative marketing: Can generative AI create superhuman visual marketing content?,” International Journal of Research in Marketing.
  • Huang, Ming-Hui and Roland T. Rust (2024), “The Caring Machine: Feeling AI for Customer Care,” Journal of Marketing, 88 (5), 1–23.
  • Kim, Inhwa, Chung-Wha Ki, Hyunhwan Lee, and Youn-Kyung Kim (2024), “Virtual influencer marketing: Evaluating the influence of virtual influencers’ form realism and behavioral realism on consumer ambivalence and marketing performance,” Journal of Business Research, 176, 1–19.
  • Kirk, Colleen P. and Julian Givi (2025), “The AI-authorship effect: Understanding authenticity, moral disgust, and consumer responses to AI-generated marketing communications,” Journal of Business Research, 186.
  • Korinek, Anton (2023), “Generative AI for Economic Research: Use Cases and Implications for Economists,” Journal of Economic Literature, 61 (4), 1281–1317.
  • Kshetri, Nir, Yogesh K. Dwivedi, Thomas H. Davenport, and Niki Panteli (2024), “Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda,” International Journal of Information Management, 75.
  • Lee, Gun Ho, Kyoung Jun Lee, Baek Jeong, and Taekyung Kim (2024), “Developing Personalized Marketing Service Using Generative AI,” IEEE Access, 12, 22394–402.
  • Ma, Liye and Baohong Sun (2020), “Machine learning and AI in marketing – Connecting computing power to human insights,” International Journal of Research in Marketing, 37 (3), 481–504.
  • Tomaselli, Angelo and Oguz A. Acar (2024), “How GenAI Changes Creative Work,” MIT Sloan Management Review, 1–4.
