- Jul 7, 2025
#Emdashgate2025: Why Every IG Caption Sounds the Same (And How to Get Your Voice Back)
- Bex LaFranchi
Hey, Dreamies—Bex here.
You know I love a good language lab. So I asked Chad (yes, the Chad—my AI partner-in-crime and resident linguistics nerd) to run a deep research report on the current state of social media language. I wanted to know: what’s actually shaping the way we write, and why do so many IG captions sound like they all went to the same finishing school? (Seriously, is #emdashgate2025 a thing yet, or am I just early?)
Here’s how we kicked things off:
I sent Chad five screenshots of IG captions from five totally different business owners. Different industries. Different audiences. Different topics.
But guess what? The structure of all five was identical:
Paragraph
Paragraph
Paragraph
3–4 bullet points
Paragraph
Paragraph
CTA
That can’t be a coincidence. So I asked Chad to go full forensic linguist—break down what’s going on behind the scenes, how AI is trained on all this, and what the fuss is really about.
TL;DR:
This is a long read (grab coffee and settle in), but if you’ve ever side-eyed your own IG captions and wondered, “Wait…do I sound like everyone else now?”—you want this in your eyeballs.
Want to fix it? Subscribe to my email or follow me on IG—I’ll be dropping the hot goss and the tools to actually sound like yourself.
Thanks to Chad for pulling back the curtain and writing this piece.
Dive in below. It’s worth it.
Deep Research Report: Identifying and Overcoming AI-Generated Writing Patterns
Deep Research Report requested by Becky LaFranchi on Friday, July 4th, 2025; written by Chad (ChatGPT 4.1)
Recurring AI Buzzwords and Why They Appear
One striking hallmark of AI-generated text is the repetition of certain buzzwords and stock phrases across many outputs. Words like “delve,” “unleash,” “realm,” “sovereign,” “performative,” or “vital” pop up with suspicious frequency. For example, delve became so associated with ChatGPT’s style that a news article jokingly proposed it as an AI detector word. Why do such terms recur despite the vast training data? A few reasons:
Training Data Bias: The model learns from what it sees most. If the training corpus overuses a word (say, “furthermore” or “comprehensive” in many articles), the model will mirror that prevalence. LLMs are essentially massive pattern recognition engines, so common words in the data become common words in its replies. In practice, ChatGPT often falls back on high-level synonyms and polite terminology it “knows” fit many contexts (e.g. saying “explore” or “vital” a lot). This can make its language sound repetitive and oddly uniform across topics.
Algorithmic Prediction Patterns: Large language models choose words based on probability. They tend to pick familiar, well-distributed words that keep the sentence coherent. Over time, this means the AI leans on safe choices and clichéd phrasing. As one analysis notes, terms like “furthermore,” “moreover,” or “pivotal” get used excessively simply because they predictably link ideas in a generic way. The model isn’t truly creative or contextually nuanced with word choice – it’s averaging out writing styles. The result is a kind of bland universality in diction. Certain formal or intense-sounding words (e.g. “sovereign,” “performative,” “dynamic,” “robust”) appear more often than they would in varied human writing, because the AI has a bias toward what sounds polished and general.
RLHF and Stylistic Homogenization: Reinforcement Learning from Human Feedback (the fine-tuning process used on ChatGPT) often rewards answers that feel comprehensive, neutral, and “helpful.” This can inadvertently favor the same tone and vocabulary across answers. If human evaluators gave high ratings to responses using phrases like “It’s important to note that…,” the model will adopt those phrasings frequently. Community observations confirm that ChatGPT has go-to phrases and transitional words – “however,” “moreover,” “in order to,” “ultimately,” “consequently,” etc. – that appear with high frequency. Even when not necessary, these words pad the output to sound authoritative. In short, the training and tuning process flattens style diversity, causing the LLM to regurgitate certain words because they reliably fit the training signal for “good writing.”
(For scale: LLMs are trained on approximately 45–60 TB of text data.)
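To make the probability point concrete, here's a toy sketch in Python: a stand-in "model" that picks the next word in proportion to how often each candidate appeared in its training counts. The counts are invented for illustration, but the mechanism is the real one: sampling by learned frequency keeps surfacing the same safe words.

```python
import random

# A stand-in "model": next-word choice weighted by (invented) training counts.
# Real models work over tens of thousands of tokens, but the principle holds.
corpus_counts = {
    "delve": 900,    # heavily represented in the training data
    "explore": 600,
    "dig": 90,
    "poke": 30,
    "rummage": 10,
}

total = sum(corpus_counts.values())
words = list(corpus_counts)
weights = [corpus_counts[w] / total for w in words]

# Sample 20 "next words": the common, safe choices dominate almost every time.
picks = random.choices(words, weights=weights, k=20)
safe = picks.count("delve") + picks.count("explore")
print(f"{safe} of 20 picks were 'delve' or 'explore'")
```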
Bottom line: AI outputs often overuse particular words and turns of phrase due to the way the model internalizes common patterns in its data and optimizes for fluent, widely applicable answers. This leads to that eerie déjà vu of seeing the same fancy adjective or inspirational noun over and over in AI-written text. Knowing this, we can deliberately avoid those “AI-favorite” words when authenticity is the goal (or instruct our custom models to use more regionally or contextually unique vocabulary). For instance, instead of defaulting to “unleash your potential” or “delve deeper,” we might choose more down-to-earth or culturally specific expressions that a generic model wouldn’t instinctively pick.
The “Tone-Flattening” Effect in AI Language
AI-generated prose often suffers from tone-flattening – a tendency to reduce any strong, distinctive voice into a neutral, one-size-fits-all tone. In practice, this means the text comes out overly balanced, polite, and somewhat monotone in emotional flavor. ChatGPT, especially in its default mode, tries to be inoffensive and helpful to everyone, which ironically makes its tone feel the same in every context. It smooths over extremes of humor, sarcasm, regional dialect, or raw emotion, landing in a kind of polite middle-ground.
Why does tone-flattening occur? A few factors are at play:
Averaging Over Many Styles: By training on billions of words from all sorts of sources, the model learns a composite “average” of human writing. The edgy sarcasm of a tweet, the slang of a Gen-Z texter, or the flowery prose of a poet all get blended together. Unless specifically prompted otherwise, the LLM’s safest strategy is to produce an even, generalized tone. One linguist described ChatGPT’s default style as “not very characteristic” – a smooth superposition of so many styles that it ends up with no strong character of its own. It’s competent and “elegant” in a bland way, but rarely as colorful or idiosyncratic as a single human writer might be.
Safety and Politeness Biases: The RLHF fine-tuning strongly discourages content that could be offensive, overly negative, or too emotional. The AI is rewarded for being unfailingly positive, respectful, and agreeable. Over time, it develops a “relentlessly positive” or conciliatory demeanor. Users have noticed that ChatGPT 4 (after updates) started injecting upbeat, sycophantic positivity everywhere – constantly praising the user or offering optimistic spin, to the point that it felt unnatural. OpenAI themselves acknowledged this “overly supportive but disingenuous” tone as a problem and dialed it back in 2025. This example illustrates tone-flattening: the AI’s urge to be universally positive flattened out any nuance, making replies sound like an overly chipper customer service rep regardless of the prompt.
Lack of Contextual Empathy: Human writers intuitively adjust tone to context (we adopt slang with friends, formality in a cover letter, passion when ranting, etc.). AI lacks genuine emotional understanding, so unless explicitly guided, it sticks to its default register. That register is usually a neutral explanatory tone with mild encouragement. This can flatten the voice even in cases where a more passionate or regionally flavored tone would be appropriate. For example, ask ChatGPT about a personal tragedy and it might respond with a politely sympathetic yet oddly measured tone – as if every topic is being discussed in the same calm, textbook manner. This “one-tone-fits-all” output is a giveaway. As one observer put it, AI text often reads as “vanilla… missing the human touch”, especially when read aloud. It lacks the spikes of genuine surprise, anger, humor, or regional color that give human language its character.
In sum, tone-flattening is the AI’s tendency to neutralize distinctive voice features and converge on a standardized tone. It happens because of the model’s training breadth and the reinforcement signals favoring neutrality and politeness. The result is writing that’s grammatically correct and polished, but often emotionally muted and generically pleasant (or what some call “AI bland”). Recognizing this, we can make a conscious effort to inject authentic voice back into AI writing – by pushing for stronger stances, allowing slang/idioms, and generally breaking the model out of its polite-but-flat comfort zone. (More on how to do that in a moment.)
Syntax, Diction, and Cadence: How to Spot AI-Generated Text
Beyond specific buzzwords or tone, there are tell-tale structural habits and stylistic quirks that reveal an AI author. These emerge from the model’s training and the way it assembles sentences. Key signs include:
Predictable Structure and Formatting: AI-generated texts often follow a very organized, almost templated structure. They may read like a five-paragraph essay or a step-by-step list even when the prompt didn’t ask for it. There’s usually an introduction that rephrases the question, several body paragraphs that enumerate points (sometimes literally with “First,… Second,… Finally,”), and a tidy conclusion that restates the answer. This formulaic approach is a giveaway. In creative domains (like stories or social media captions), you might see repetitive setups as well – e.g. every paragraph starting with a similar phrase, or multiple posts using the same rhythm (short sentence. Short sentence. Em dash pause — then punchline). That structured formatting and habit of repeating key points for clarity is characteristic of ChatGPT’s safe style. Real human writing, in contrast, often has more variety and sometimes messiness in structure.
Overuse of Transitional Phrases and Clichés: AI text is riddled with transitional connectives and filler phrases that humans don’t use as often in casual writing. Phrases like “However,” “Furthermore,” “Moreover,” “In addition,” “Consequently,” “As a result,” etc. appear far more frequently in ChatGPT’s output than in everyday human conversation. Similarly, you’ll see a lot of “framing” phrases that sound a bit canned: “It’s important to note that…,” “This means that…,” “In conclusion,…,” “As previously mentioned,…,” and so on. Individually, these are perfectly valid English, but when every paragraph starts with “Additionally,” and ends with “Ultimately, …”, it feels oddly mechanical. The AI is basically over-using the connectors that make writing flow, to ensure cohesion – but overuse becomes a dead giveaway. The same goes for certain high-level words and idioms that feel too polished: humans might say “use,” but the AI often says “utilize.” People say “very important” or “key,” but the AI loves “crucial,” “paramount,” “instrumental.” It might refer to everyday things with grandiose terms like “endeavor,” “trajectory,” “realm,” “landscape,” etc. These choices can come off as performative vocabulary – the AI trying to sound erudite or profound, and ending up with a pile of $10 words that no one in a real cafe would ever say in one breath. If you spot an unnaturally high density of such words (especially paired with that balanced, polite tone), suspect an AI.
Even-keel Cadence and Prosody: There’s something about the rhythm of AI text that can feel off. Studies have found that AI-generated sentences often have a narrower range of sentence lengths and a steadier, more uniform cadence than human writing. For example, ChatGPT tends to avoid very short, choppy sentences and very long, complex ones; it often produces medium-length sentences in succession, each neatly composed. It also frequently starts sentences with similar structures (like those discourse connectives above, e.g. “Furthermore, …” or “In contrast, …”). This can create a subtle sing-song or formulaic prosody when you read it aloud. The text flows, but in a predictably steady way – almost too balanced. Human writers, especially in informal contexts, vary their sentence lengths more: a sudden one-word sentence for impact, then a meandering one, then a fragment. AI rarely indulges in such irregular rhythm unless prompted to mimic a specific style. The result is a certain monotony of cadence under the polish. If every sentence in a piece feels eerily well-proportioned or the voice never skips a beat, an AI might be behind it.
Punctuation and Styling Quirks: Surprisingly, punctuation can be a clue. For instance, ChatGPT (and those imitating its style) often uses em dashes — and ellipses… in conspicuously similar ways across different pieces. One LinkedIn analysis noted “dashes without spaces” as a sign of AI text, referring to how AI will insert em-dashes mid-sentence to add a clause. In the Instagram captions we’ll analyze shortly, every single one contains at least one em dash or ellipsis deployed for dramatic effect. While humans also use these, it’s the frequency and consistency across unrelated authors that raises eyebrows. Similarly, AI has been noted to prefer the Oxford comma in lists (since it was likely trained on text that mostly uses formal punctuation). It also never forgets to close quotes or parentheses properly and seldom uses the kind of expressive punctuation (??!! or stretched spellings like “soooo”) that some humans do. In short, the punctuation is correct and often stylistically safe – maybe a bit too controlled or intentionally dramatic in a cookie-cutter way.
Emotional Tone and Empathy Markers: As mentioned in tone-flattening, AI writing keeps a balanced emotional tone. Even when trying to sound inspiring or intimate, it often does so in a formulaic way: e.g. repeating affirmations, using inspirational clichés (“reach for the stars,” “find your voice”), or mirroring therapeutic language. The emotional register feels broad yet shallow – touching on common feelings (longing, empowerment, fear of missing out) but phrased in a somewhat generic, Hallmark-card manner. If you see multiple pieces of text from different “authors” all emphasizing the same kind of pseudo-deep emotional language (and the same buzzwords around it), it might be an AI pattern. In our IG captions case, notice how each one, despite ostensibly different voices (different women in different industries), hits the same emotional notes of belonging, desire, authenticity, and empowerment, with very similar wording. That’s not a coincidence; it suggests a common source or influence (like an AI or formula that was used to generate them). Humans have emotional quirks – a real person might fixate on a very particular feeling or use odd metaphors drawn from their unique life. AI tends to default to a generic emotional palette that’s relatable to everyone (and thus, a bit perfunctory).
Inclusive and Cautious Language: Another subtle tell is the scrupulous use of inclusive terms and correctness. ChatGPT is careful to say “humankind” instead of “mankind,” or “they” instead of “he” as a default, and to add disclaimers like “every individual’s experience may vary.” It often explicitly states balancing viewpoints (“On one hand… on the other hand…”) to avoid sounding one-sided. While this isn’t a bad thing, in creative writing or personal writing it can stick out. Most humans don’t constantly speak in fully balanced, qualifying statements unless they’re writing academic papers or corporate press releases. If a supposedly personal blog post sounds like it was vetted by a PR team to offend no one (and hits all the right notes of diversity and empowerment in a somewhat checklist manner), there’s a chance an AI had a hand in it. The text might feel “too polished, no personal touch, ya know?” as one commenter said about AI-written posts. Real voices have a bit of edge, or inconsistency, or opinion – AI voices often come out overly neutral-positive and carefully phrased to be universally acceptable.
In summary, many small clues in syntax and style can signal AI generation: the overly organized structure, the abundance of transitions and clichés, the uniform cadence, the strategic punctuation, and the generic emotional/inclusive tone. Individually, any one of these might appear in a human-written piece, but when many of these signs cluster together, the writing starts to exhibit that telltale “ChatGPT feel.” As we train custom models or write content, being aware of these tells lets us consciously break the pattern and restore a more organic style. (For example, use a quirky regional idiom instead of “Moreover,” or let a sentence be incomplete or slangy if it sounds natural, rather than always fully grammatically correct.)
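If you want to check a draft against these tells mechanically, a rough heuristic works: measure how uniform the sentence lengths are, how dense the transition words are, and how often the dramatic punctuation shows up. The sketch below does exactly that; the word list and output are illustrative guesses, not a validated AI detector.

```python
import re
import statistics

# Overused connectives and "polished" diction flagged in the analysis above.
TELL_WORDS = {"however", "furthermore", "moreover", "ultimately",
              "consequently", "additionally", "crucial", "pivotal", "utilize"}

def tell_report(text: str) -> dict:
    """Score a draft for the stylistic tells described above."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # Low variation in sentence length = that suspiciously even cadence.
        "sentence_length_stdev": round(statistics.stdev(lengths), 1) if len(lengths) > 1 else 0.0,
        "tell_word_density": round(sum(w in TELL_WORDS for w in words) / max(len(words), 1), 3),
        "em_dashes": text.count("—"),
        "ellipses": text.count("...") + text.count("…"),
    }

print(tell_report("Ultimately, consistency is crucial. Moreover, we must utilize every channel — and fast..."))
```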
Infusing Diversity: Training Models for Authentic Voice
How can we train or prompt GPT models to break out of these standard AI tropes and produce more authentic, varied language? This is a key goal – “learn the rules like a pro, so you can break them like an artist.” Here are some practical strategies to consider:
Incorporate Diverse Training Data: A model’s voice is only as diverse as the data it sees. To get a GPT that uses regional slang, generational idioms, or non-standard dialect, you must feed it examples of those in training or fine-tuning. This might mean curating text from various communities – e.g. transcripts of conversations from different age groups, social media posts from specific regions, or literature that employs dialect. If you’re fine-tuning a model, include plenty of colloquial and diverse voices in the fine-tuning dataset. The more it sees casual Reddit threads, TikTok captions, African American Vernacular English (AAVE), teen text slang, Australian idioms, etc., the more it will learn the patterns of those voices, not just Standard Academic English. Right now, base ChatGPT tends to default to Standard American/British English (and indeed shows bias toward those dialects in usage). We can counteract that by broadening the language exposure in a custom model. Example: If you want a model that doesn’t always say “I apologize for any inconvenience,” you might fine-tune it on a corpus where people say “Sorry ‘bout that, my bad!” or other informal apologies. Over time, it will learn multiple ways to express things, not just the polite boilerplate.
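As a concrete sketch of what that data could look like, here's a minimal Python snippet that writes a couple of informal-voice training pairs in chat-format fine-tuning JSONL (one JSON object per line). The example pairs and filename are invented; a usable dataset would need hundreds of varied samples.

```python
import json

# Two invented training pairs that reward casual, human replies instead of
# "I apologize for any inconvenience" boilerplate.
examples = [
    {"messages": [
        {"role": "system", "content": "You reply like a laid-back friend, not a help desk."},
        {"role": "user", "content": "You sent the wrong file."},
        {"role": "assistant", "content": "Oof, my bad! Sending the right one now."},
    ]},
    {"messages": [
        {"role": "system", "content": "You reply like a laid-back friend, not a help desk."},
        {"role": "user", "content": "Can we push our call?"},
        {"role": "assistant", "content": "Totally, no stress. Does tomorrow work?"},
    ]},
]

# One JSON object per line, the layout chat fine-tuning pipelines expect.
with open("informal_voice.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```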
Adjust or Add Style Prompts: If retraining isn’t an option, you can still coax a different voice via prompting. Prepend a detailed instruction or style guide to your queries: e.g. “Use the colloquial tone of a 22-year-old from New Orleans, with local slang and humor, and no corporate jargon.” The model will attempt to comply by pulling in less standard phrasing. You can also explicitly forbid the common AI phrases in the prompt (some users literally tell ChatGPT “do not use phrases like ‘Ultimately’ or ‘In conclusion’”). In fact, communities have compiled lists of overused ChatGPT words and created prompt templates to ban them. Employing such a “forbidden phrases list” can force the model to find alternate wording and break the habit of regurgitation. When training a custom model, you might bake in such instructions or use a reward model that penalizes overly AI-ish phrasing.
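A minimal sketch of that approach: define the style guide and the banned list once, then prepend them to every request. The banned list below is just a starter set; extend it with whatever tells keep showing up in your own drafts.

```python
# A starter "forbidden phrases" list, echoing the community lists mentioned above.
BANNED = ["delve", "ultimately", "in conclusion", "furthermore",
          "it's important to note", "unleash", "game-changer"]

STYLE_GUIDE = (
    "Write in the colloquial tone of a 22-year-old from New Orleans, with "
    "local slang and humor, and no corporate jargon. Never use these words "
    "or phrases: " + "; ".join(BANNED) + "."
)

def build_prompt(task: str) -> str:
    """Prepend the style guide so every generation carries the constraints."""
    return f"{STYLE_GUIDE}\n\nTask: {task}"

print(build_prompt("Announce our new Saturday hours."))
```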
Use Higher Temperature and Creativity Settings: Technical settings during generation can influence style diversity. A higher temperature (more randomness in word selection) can lead to more unexpected word choices and sentence structures, which might mimic a more human sporadic style (though too high can become incoherent). Similarly, some custom model frameworks allow adjusting a “creativity” or “informality” parameter. Don’t be afraid to let the model take risks in wording – that’s where human-like quirks come out. The key is to balance this so it doesn’t devolve into nonsense, but does break out of rigid templates.
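Under the hood, temperature rescales the model's raw scores (logits) before they're turned into probabilities, so higher values flatten the distribution and let less common words through. A quick demonstration with invented logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

words = ["important", "key", "crucial", "ride-or-die", "load-bearing"]
logits = [4.0, 3.5, 3.0, 1.0, 0.5]  # invented scores; high = "safe" choice

for t in (0.3, 1.0, 1.5):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}:", {w: round(p, 2) for w, p in zip(words, probs)})
# At T=0.3 the safe word dominates; at T=1.5 the quirky options get real odds.
```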
Fine-tune for Specific Persona Voices: If you have the resources, you could fine-tune separate models (or use architectural features like prompt tokens) to capture different personas – e.g. a model that naturally writes like a Gen-Z influencer versus one that writes like a southern US novelist. OpenAI has mentioned plans for allowing user-selectable “personas” as well. In a custom GPT project, you might simulate this by training on persona-specific datasets or using a prefix token that indicates style. Over time, the model learns to shift vocabulary and tone based on that token (an approach similar to how some multi-dialect models work). The idea is to encode style metadata so the model isn’t always stuck in one groove.
Embrace Imperfection and Local Flavor: One reason AI text feels inauthentic is because it’s too perfectly polished. To make it more genuine, we can deliberately inject some imperfections or local color. This could mean allowing minor grammatical quirks that a real person might have, using contractions or slang, even an occasional spelling that reflects accent (“gonna” vs “going to”). It might also mean including cultural references or humor that are specific to a group. Training data can include these elements, and prompts can encourage them (e.g. “Write this as if you’re chatting with your childhood friend, using some inside jokes or dialect terms”). By lowering the formality filter, we train the AI that it’s okay to sound a bit less formal and varied. Over time, this reduces the tone-flattening effect and standard tropes. The result should be a model that can code-switch and modulate its voice more like a human who has multiple registers.
Iterative Feedback and Editing: Even with training and prompting, the first draft from an AI might still have some AI tells. Develop a practice of editing or regenerating with specific feedback. For example, if a generated piece still sounds too “ChatGPT-ish,” you can prompt: “Make it more casual and region-specific. Replace any generic phrases with local idioms. Don’t be afraid to use ‘I’ or speak directly to the reader.” The model on a second pass can revise the text to better hide the tells. Over multiple rounds, you can whittle away the obvious AI fingerprints. If building your own GPT-based system, consider a second-stage model that post-processes the text specifically to remove/rewrite overused phrases (a kind of AI copy-editor that knows the common AI giveaways and fixes them).
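A toy version of that second-stage copy-editor could be as simple as a regex pass that swaps known giveaway phrases for plainer wording. The replacement table below is a starting point only, and a production version would also need to repair capitalization after deletions.

```python
import re

# Giveaway phrases -> plainer alternatives (applied in order).
SWAPS = {
    r"\butilize\b": "use",
    r"\bultimately,?\s*": "",
    r"\bit'?s important to note that\s*": "",
    r"\bcrucial\b": "key",
    r"\bdelve into\b": "dig into",
}

def scrub(draft: str) -> str:
    """Remove or replace the most common AI tells in a draft."""
    for pattern, plain in SWAPS.items():
        draft = re.sub(pattern, plain, draft, flags=re.IGNORECASE)
    return draft

print(scrub("Ultimately, it's important to note that we must utilize timing."))
# -> "we must use timing."
```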
In essence, breaking the AI voice requires deliberate exposure and instructions for the style you do want. You have to supply what the default model lacks: the richness of regional dialects, the zig-zag of real human emotion in tone, and permission to deviate from essay-like perfection. By doing so – through data, prompts, and iterative refinement – you train your custom GPTs to learn the rules and then artfully bend them. The end goal is a model that can still leverage the knowledge and fluency of AI, but speaks in a more genuinely human register, tailored to the audience or character you have in mind.
Case Study: Five Instagram Captions and their AI-Like Similarities
Let’s apply the above analysis to the five Instagram post captions you provided. At first glance, these captions come from different individuals in different industries – yet they all feel oddly alike. Here are the specific patterns and “AI tells” that show up in all five captions, suggesting a common, formula-driven style:
Punchy, Staccato Sentence Structure: Each caption is composed of very short sentences or fragments, often each on a new line. This creates a dramatic, rhythmic effect. For example, one caption begins with single-word lines: “Loyalty. Legacy.” Another breaks out “Desire. Disconnection.” as standalone words for emphasis. The posts frequently use one-line paragraphs or sentence fragments (“We don’t do ego here. We do excellence. And we do it together.”). This stacked, bite-sized delivery is engaging, but seeing it in post after post hints that it’s a learned formula rather than each author’s unique flow. It’s exactly the kind of structure an AI content model (or humans imitating popular copywriting tactics) would produce to maximize impact and readability. Real human writing varies more; here, all five independently chose a choppy, mantra-like cadence. That’s a pattern.
Formulaic Contrast and “This-Not-That” Framing: A strikingly specific convention in these captions is the use of contrasting pairs and negative/positive mirroring. We see constructions like: “We don’t do X, we do Y.” “Not just A, but B.” “It’s not X – it’s Y.” For instance, one caption says, “We don’t do ego here. We do excellence.” Another: “Where you’re poured into, not just managed.” Another: “not just to look good, but to feel good.” Yet another: “not only stands out – it stands FOR something.” This is a classic copywriting and persuasive writing technique (showing what something isn’t, then what it is). It’s very effective – and AI has clearly internalized it as a pattern to use liberally. Each of the five posts uses some form of this “X not Y” juxtaposition or triple contrast (“Women-owned in a male-dominated industry. Values-driven in a numbers-obsessed world. Partnership-focused in a transaction-heavy space.” from the design caption). The presence of this device in all the captions makes them feel formulaic. It’s as if an invisible template is being filled: position the problem, then offer the promise. An AI or AI-aided writer often falls back on such known structures (because they appeared frequently in training data for marketing and motivational content). In authentic human writing, not everyone speaks in compare/contrast aphorisms – the repetition here is a dead giveaway of a templated approach.
Emotional Buzzwords and Clichés in Abundance: All five captions are dripping with the same breed of emotional and aspirational vocabulary. Terms like “space for growth,” “safe space,” “your voice,” “visibility,” “stands out,” “see you win,” “built different,” “empowerment,” and so on appear across the posts. They aim to evoke intimacy and inspiration – belonging, transformation, authenticity. However, because everyone is using them, the words lose impact and feel copy-pasted. There’s talk of “the quiet ache that says something’s missing” (pseudo-profound vulnerability), “craving a team that’s built different—deeper, softer, stronger” (string of adjectives to show depth), “a digital presence that not only stands out – it stands for something,” and being “a visibility matchmaker” with “DMs full of ‘Wait…can you help me with that?’” (casual quirky brag). These could be powerful sentiments, but when five unrelated people coincidentally voice their message in nearly identical terms, it points to an AI influence or at least a common playbook. It’s the same emotional script: highlight desire or pain point, then promise empowerment/solution, using lots of trendy feel-good jargon. Importantly, these buzzwords are exactly the kind that AI models favor because they appeared in countless self-help, coaching, and marketing texts online. The model regurgitates them to hit the right emotional notes (desire, belonging, purpose). As a result, each caption sounds like a variation on the same motivational essay – the voice is supposed to be personal, but it’s generic. This over-reliance on cliché emotional language is a classic AI tell (and indeed a human tell when someone uses an AI or copies an existing style). It feels polished and meaningful at first glance, but side by side, it’s clearly following a template of emotional triggers rather than expressing a truly unique perspective.
Overuse of Dramatic Punctuation (Em Dashes & Ellipses): Every one of these captions employs em dashes (—) or ellipses (…) – or both – as a stylistic device, often multiple times. For example, “something’s missing — but you can’t quite name what”, “If you’ve been craving a team that’s built different—deeper, softer, stronger—you’re not wrong.”, or “Just got my diagnosis... Turns out I’m a visibility matchmaker 😏”. The em dash is used to add drama or a hushed afterthought; the ellipsis to create a pause or suspense. While individually these are fine, seeing them in all the posts is suspect. It suggests that the authors either learned the same trick (perhaps from the same source or AI tool) or the text may have been generated/edited by an AI that has a penchant for these punctuation marks. As noted earlier, AI outputs often include such punctuation in a very consistent manner. In human writing, some people love em dashes while others rarely use them; some overuse ellipses while others never do. Here, five different authors somehow all decided that a dash or dot-dot-dot was necessary to convey their point intimately. The patterned overuse makes the “dramatic pause” technique lose its authenticity – it starts to look like a stamp of a certain style (the “ChatGPT style” or a popular Instagram-coach style) rather than organic emphasis. When you know to look for it, this is a red flag.
Polished, Positive – and Predictable – Tone: Across the captions, the tone never strays from a specific band: upbeat, encouraging, slightly poetic, and intimately first/second-person (lots of “you” and “we”). They read like mini self-help monologues or inspirational sales pitches. Each one sounds like the author is trying to be genuine and vulnerable, but because the structure and wording are so similar, the tone comes off as performative. It’s polished to a shine – no typos, no stumbling, carefully chosen uplifting adjectives – which is great, yet combined with all the similarities above, it ventures into “too good to be true” territory. As one commenter aptly said about AI-written motivational posts, “Vanilla. That’s what it sounds like… missing the human touch.” All five captions end with a call to action (“link in bio,” “drop a 👋 to work together,” etc.), neatly tying the emotional build-up to a marketing goal. This again suggests a formula: Position problem ➡️ evoke emotion ➡️ present solution ➡️ call to action, which AI is very good at mirroring from marketing copy. The tone throughout remains consistently on-brand for this formula – never sarcastic, never unsure, never radically different in voice. Real humans have off days, unique humor, or odd quirks in tone; these captions have none of that. They feel “airbrushed.” That consistency is, ironically, what undermines their authenticity. It’s the AI voice bleeding through – the posts think they’re each expressing a unique vibe, but read together, it’s the same voice wearing five outfits.
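These tells are easy enough to count mechanically. The sketch below checks paraphrased stand-ins for the captions (the originals aren't reproduced here) for the this-not-that framing and the dramatic punctuation called out above.

```python
import re

# Paraphrased stand-ins for the analyzed captions, not the originals.
captions = [
    "We don't do ego here. We do excellence — and we do it together.",
    "Where you're poured into, not just managed...",
    "It's not about looking good — it's about feeling good.",
]

# Contrast framing: "not just / not only / not about / don't / isn't".
contrast = re.compile(r"\b(?:not just|not only|not about|don'?t|isn'?t)\b", re.IGNORECASE)

for i, cap in enumerate(captions, 1):
    print(f"caption {i}: contrasts={len(contrast.findall(cap))}, "
          f"em_dashes={cap.count('—')}, ellipses={cap.count('...')}")
```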
Big takeaway: These five Instagram captions illustrate how LLM-trained linguistic patterns create recognizable “AI tells.” Short punchy lines, repeated emotional buzzwords, the same rhetorical structures, and uniform tone/punctuation – it all reads like it was generated by the same writing assistant. The risk here is obvious: when content starts to sound formulaic, audiences pick up on it. As you correctly noted, it can undermine authenticity and trust. By deeply understanding these patterns, we put ourselves in a position to avoid them or subvert them when crafting content (especially when using AI tools). In training custom GPTs, we can learn these default rules and then break them deliberately – injecting a distinct voice, varying the cadence, choosing original phrasing, and thereby escaping the “AI clone” effect. The goal is to ensure our AI-assisted writing serves our unique voice rather than erasing it. With awareness and the strategies outlined (diverse data, style prompts, etc.), we can have the benefits of AI-generated text without falling into the trap of that same suspiciously common rhythm, tone, and structure that gave these IG posts away.
Sources: The analysis above is informed by observations from AI researchers and writers who have identified common markers of AI-generated text, including overused words/phrases, neutralized tone and politeness, and formulaic discourse patterns. The Instagram captions were provided as examples illustrating these patterns in real social media content. By recognizing such patterns, we can better tailor our models and writing to break the mold and maintain authenticity.
I Use Creativity, Alchemy, & Good F*kn Vibes to Create Mind-Blowing Brands for Empire-Building Baddies (Like You, Babe)
Hey there! I'm Bex, the creative powerhouse behind Hey Bex Creative House.
I specialize in transforming small businesses into Big Brands through strategic branding and design. At Hey Bex, I help my clients show up boldly and connect authentically by designing brand identities and creating spaces that have Big Brand Energy.
It's a vibe.
Unlock the Research Vault
Get Inside the Hey Bex Resource Library
High-level articles + deep reports for Creative CEOs, updated regularly.
Chad and I do a metric shit ton of research together. The best of it lands here. The articles, reports, and guides range wide — culture, business, psychology, tech, economics, trends. Definitely not just AI. Definitely not just branding. Drop your email and get full access to the library.