How we are using AI to handle the bulk of badly written press releases
We have long – extremely long, actually – endured the horror of badly written press releases. There is the ‘acclaimed’ issue I have talked about in the past, but there is more. Too often press releases are so cryptic they could easily double as bad modernist poetry. Nice for the poet, less nice for the people at Side-Line writing the news items – which mostly means me. And then AI arrived…
The amount of news we received after ChatGPT went live went up exponentially; unfortunately, the quality of the press copy dropped just as fast. The reason: a lot of bands, labels and PR people simply have no clue how to use ChatGPT properly and generate all kinds of nonsense copy.
Time to address the problem at its core and tackle both badly written copy and AI fluffiness.
Training an AI to survive the PR trenches
To address the growing flood of chaotic press releases, we kicked off a dedicated AI project in late 2024. It started by training artificial intelligence on our own carefully curated archive – tens of thousands of news articles formed by years of editorial practice. Over several weeks, the model was taught to internalize our writing style, priorities, and the all-important whys and hows behind our coverage choices.
With that groundwork in place, we moved to the real challenge: feeding the AI a batch of 7,854 press releases, spanning the full quality spectrum from respectable to catastrophic. Over the course of about one month, the system methodically tore through them, identifying structural flaws, stylistic pitfalls, and common patterns of confusion.
Our ultimate aim: reduce the hours lost to deciphering incoherent announcements and redirect that energy into producing more articles – something that was simply impossible while so much time disappeared into ‘deciphering’ work.
Instead of relying on off-the-shelf AI models, we took a true DIY approach. By training exclusively on our own content – the thousands of music news stories, interviews, and announcements published over the years – we ensured the model absorbed the real-world cadences, clichĂ©s, and peculiarities specific to the music scene. From grandiose album announcements to festival lineup hyperbole, the AI learned exactly what makes music news tick – and what usually makes it stall.
Technically speaking, we fine-tuned transformer-based models (custom GPT and T5 variants) to:
- Decode announcements buried under layers of marketing fluff.
- Normalize genre-specific jargon into readable text.
- Extract essential metadata like release dates, tracklists, guest artists, and tour locations.
- Flag incomprehensible “artistic manifestos” for manual review.
This tailored training approach allowed the AI to not just “understand English,” but to understand music journalism English – a crucial distinction.
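For the technically curious, here is a minimal sketch of what that fine-tuning step can look like with the Hugging Face stack. The checkpoint name, data path and hyperparameters are illustrative stand-ins rather than our production configuration; the assumed training data is a JSONL file of press-release and published-article pairs drawn from our archive.

```python
# Minimal sketch: fine-tuning a T5-style model on press-release -> news-copy pairs.
# Paths, checkpoint and hyperparameters are illustrative, not our production setup.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MODEL = "t5-base"  # assumption: any seq2seq checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

# Expected JSONL format: {"press_release": "...", "article": "..."} per line.
dataset = load_dataset("json", data_files={"train": "archive_pairs.jsonl"})

def preprocess(batch):
    # A task prefix tells the model which transformation it is learning.
    inputs = ["rewrite press release: " + t for t in batch["press_release"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["article"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset["train"].map(preprocess, batched=True,
                                 remove_columns=dataset["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="pr-rewriter",
                                  per_device_train_batch_size=4,
                                  num_train_epochs=3,
                                  learning_rate=3e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```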
Below are the 3 steps we take in this process.
1. Automated triage: Because not every comeback single is ‘epoch-defining’
Our AI first sorts incoming material into three neat piles:
- Green: Coherent enough to move directly to light editing.
- Yellow: Containing actual news, but structurally mangled beyond recognition.
- Red: Philosophical treatises disguised as press releases, politely returned to sender.
This triage system – based on real patterns in our website’s historical content – has become essential to maintaining both sanity and journalistic standards.
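Under the hood, the triage is little more than a routing layer on top of a text classifier. The sketch below assumes a fine-tuned classification model; the model name and queue names are placeholders, not real endpoints.

```python
# Sketch of the green/yellow/red triage step.
# "sideline/pr-triage" is a placeholder name for a fine-tuned classifier, not a real model.
from transformers import pipeline

triage = pipeline("text-classification", model="sideline/pr-triage")

ROUTES = {
    "green": "light_edit_queue",    # coherent: straight to light editing
    "yellow": "rewrite_queue",      # real news, mangled structure: full rewrite
    "red": "return_to_sender",      # manifesto: politely declined
}

def route_press_release(text: str) -> str:
    # Crude character-level truncation to stay within the model's input limit.
    label = triage(text[:2000])[0]["label"].lower()
    return ROUTES.get(label, "manual_review")
```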
2. Teaching the AI taste (or, how not every ‘genre-defying’ album matters)
We don’t pretend our AI has “good taste” in the aesthetic sense – but it does have a “clarity bias” informed by years of our own editorial standards. We developed a custom “readability and relevance” score, factoring in:
- Passive voice density
- Jargon-to-meaning ratio
- ClichĂ© usage frequency (“soaring vocals,” “blistering guitar riffs”)
- Quote coherence
Press releases that score below our minimum threshold are either comprehensively reworked or quietly archived in the “not today” (or “never again”) folder.
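A stripped-down version of that score could look like the sketch below. The weights, the threshold and the regex-based passive-voice check are deliberately crude illustrations, and quote coherence, which is judged by the model itself, is left out here.

```python
# Rough sketch of the "readability and relevance" score, with made-up weights.
import re

CLICHES = ["soaring vocals", "blistering guitar riffs", "genre-defying",
           "reshaping the sonic landscape", "epoch-defining"]
# Naive passive-voice heuristic: "to be" followed by a past participle.
PASSIVE = re.compile(r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b", re.I)

def clarity_score(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    sentences = max(1, text.count("."))
    passive_density = len(PASSIVE.findall(text)) / sentences
    cliche_rate = sum(text.lower().count(c) for c in CLICHES) / len(words)
    long_word_ratio = sum(len(w) > 12 for w in words) / len(words)  # crude jargon proxy
    # Start from 1.0 and subtract weighted penalties; weights are illustrative.
    score = 1.0 - 0.3 * passive_density - 50 * cliche_rate - 2 * long_word_ratio
    return max(0.0, min(1.0, score))

MIN_THRESHOLD = 0.55  # below this: rework, or off to the "not today" folder
```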
3. Rewriting with surgical precision
When confronted with a yellow-tier disaster, the AI operates methodically. It:
- Extracts what should have been the lead: the “who, what, when, where, why.”
- Restructures meandering paragraphs into crisp news copy.
- Salvages quotes (or what passes for them) and attributes opinions properly.
Our technical process includes:
- Semantic parsing to understand the story beneath the word salad.
- Coreference resolution to untangle who “they” and “it” actually refer to.
- Content summarization tuned to the pacing and voice of music news articles.
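That summarization pass, for instance, can be sketched with an off-the-shelf Hugging Face pipeline. The checkpoint below is a general-purpose stand-in, not our music-news-tuned model, and the length limits are illustrative.

```python
# Sketch of the summarization pass over an already-cleaned press release.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_release(cleaned_text: str) -> str:
    # A short, news-paced summary that can serve as the article's opening paragraph.
    result = summarizer(cleaned_text, max_length=120, min_length=40, do_sample=False)
    return result[0]["summary_text"]
```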
Named Entity Recognition (NER) models spot and tag artist names, album titles, festival names, and more, feeding directly into our SEO optimization and archival tagging system.
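A simplified version of that tagging step might look like the sketch below. It uses spaCy’s general-purpose English model; our production pipeline swaps in a custom component with music-specific labels.

```python
# Sketch of the entity-tagging step that feeds SEO and archive tags.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_tags(article_text: str) -> dict:
    doc = nlp(article_text)
    tags = {"artists": [], "places": [], "dates": [], "works": []}
    for ent in doc.ents:
        if ent.label_ in ("PERSON", "ORG"):
            tags["artists"].append(ent.text)       # bands often come out as ORG
        elif ent.label_ in ("GPE", "LOC", "FAC"):
            tags["places"].append(ent.text)        # tour locations, venues
        elif ent.label_ == "DATE":
            tags["dates"].append(ent.text)         # release and tour dates
        elif ent.label_ == "WORK_OF_ART":
            tags["works"].append(ent.text)         # album and track titles
    return {key: sorted(set(values)) for key, values in tags.items()}
```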
To strengthen the system further, we also integrated external fact-checking mechanisms. Using APIs connected to authoritative music databases and official label release feeds, the AI cross-verifies dates, album titles, tracklists, and artist names against real-world data.
Whenever a press release claims a “debut album” that is actually the band’s third – or mislabels a tour location – the AI flags the inconsistency automatically. This added layer of verification ensures not only faster processing but also a noticeable boost in overall accuracy and reliability across all published pieces.
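As an illustration, the sketch below runs a ‘debut album’ sanity check against the public MusicBrainz search API. Rate limiting, caching and proper error handling are trimmed for brevity, and the contact address in the User-Agent header is a placeholder.

```python
# Sketch of a "debut album" sanity check against the MusicBrainz search API.
import requests

MB_URL = "https://musicbrainz.org/ws/2/release-group"
HEADERS = {"User-Agent": "PRChecker/0.1 (contact@example.org)"}  # placeholder contact

def count_albums(artist_name: str) -> int:
    """Count album release-groups MusicBrainz knows for an artist."""
    params = {
        "query": f'artist:"{artist_name}" AND primarytype:album',
        "fmt": "json",
    }
    resp = requests.get(MB_URL, params=params, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json().get("count", 0)

def flag_debut_claim(artist_name: str, press_text: str) -> str | None:
    # Flag the release for manual review if "debut" looks dubious.
    if "debut album" in press_text.lower() and count_albums(artist_name) > 1:
        return f'"{artist_name}" claims a debut album but already has albums on record.'
    return None
```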
The final output is then manually revised once more, any remaining ‘issues’ are flagged and fed back to the AI, and the SEO is fine-tuned.
AI cleans up the mess, but the bands (and PR agents) still make it
Since integrating AI into our editorial workflow:
- We reduced manual editing time by 65%.
- We increased the number of published articles by 40%.
- We spent 80% less time trying to decipher which “visionary act” was “reshaping the sonic landscape.”
In short, AI has freed up our time to focus on what matters: real stories about real music, not just decoding PR hyperbole.
Is it faultless? No. The system needs constant retraining and is only as good as the data it is fed. Our AI is not a miracle worker. It cannot turn a bland album launch into breaking news. What it can do – and does – is clean up the debris field of badly written press releases, using the hard-earned editorial instincts we’ve embedded into it through years of music journalism experience.
One day, perhaps, artists and labels will craft announcements that don’t require algorithmic CPR. Until then, our AI remains on the front lines, armed with a mop, a red pen, and a deep, machine-learned understanding of just how weird this industry can be.